This behavior is expected: the pretrained network and its parameters are tuned to the data it was trained on, and your test videos may differ from that training set in many respects (camera, lighting, road markings, and so on). There is no direct way to use the pretrained "laneNet" series network on custom videos.
You can, however, take the pretrained network and use it as a starting point for a new task by performing transfer learning. Fine-tuning a pretrained network is usually much faster and easier than training a network from scratch with randomly initialized weights, and it lets you transfer the learned features to the new task with a smaller amount of training data.
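As a rough sketch of that transfer-learning workflow in MATLAB (this assumes Deep Learning Toolbox, a hypothetical labeled image datastore `trainImgs`, and a hypothetical class count `numClasses`; the layer indices 23 and 25 correspond to the final fully connected and classification layers of AlexNet):

```
% Load the pretrained network (requires Deep Learning Toolbox
% and the AlexNet support package)
net = alexnet;
layers = net.Layers;

% Replace the last learnable layer and the classification layer
% so the network outputs your new classes instead of the original 1000
numClasses = 2;                                   % example value
layers(23) = fullyConnectedLayer(numClasses);
layers(25) = classificationLayer;

% Fine-tune with a low learning rate so the pretrained weights
% are only adjusted slightly
opts = trainingOptions('sgdm', ...
    'InitialLearnRate', 1e-4, ...
    'MaxEpochs', 10);

% trainImgs is a hypothetical imageDatastore of your labeled images
newNet = trainNetwork(trainImgs, layers, opts);
```

The key idea is that only the replaced final layers start from random weights; the earlier convolutional layers keep the features learned on the original data.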
You can refer to the following link for more information on how to use transfer learning to retrain AlexNet, a pretrained convolutional neural network, to classify a new set of images:
Another option for lane detection is to use the Ground Truth Labeler app in Automated Driving System Toolbox to label your own set of training data with ground truth for the left and right lane boundaries.
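The labeling itself is interactive, but a minimal sketch of how the app fits into the workflow looks like this (assuming Automated Driving System Toolbox; `gTruth` is the variable name you choose when exporting labels from the app):

```
% Open the Ground Truth Labeler app, load your video, and draw
% lane boundary labels interactively
groundTruthLabeler

% After labeling, export the labels to the workspace (e.g. as gTruth,
% a groundTruth object); the labeled lane boundary data is then
% available as a timetable for training
laneData = gTruth.LabelData;
```

The exported `groundTruth` object can then serve as the training data for the transfer-learning step described above.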
You can refer to the following links for more information on Deep Learning for lane detection and automating the ground truth labeling of lane boundaries respectively:
You can also define the camera configuration so that lane detection is performed correctly on your video. You can refer to the following link for an example that shows how to construct a monocular camera sensor simulation capable of lane boundary detection:
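As a sketch of how such a camera configuration is defined in MATLAB (assuming Automated Driving System Toolbox; all numeric values below are placeholders that you must replace with your own camera's calibration and mounting measurements):

```
% Intrinsic parameters from your camera calibration (placeholder values)
focalLength    = [309.4362 344.2161];   % [fx fy] in pixels
principalPoint = [318.9034 257.5352];   % [cx cy] in pixels
imageSize      = [480 640];             % [rows cols]
camIntrinsics  = cameraIntrinsics(focalLength, principalPoint, imageSize);

% Extrinsic (mounting) parameters (placeholder values)
height = 2.18;    % camera height above the road, in meters
pitch  = 14;      % downward pitch of the camera, in degrees

% Combine into a monocular camera sensor configuration
sensor = monoCamera(camIntrinsics, height, 'Pitch', pitch);
```

The resulting `monoCamera` object defines the mapping between image pixels and road coordinates, which the lane boundary detection example relies on to place detected lanes in vehicle coordinates.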