If you are working with big data, using readlines() is inefficient and can raise a MemoryError, because this function loads the entire file into memory before iterating over it.
A better approach for large files is to use the fileinput module, as follows:
import fileinput
for line in fileinput.input(['sample.txt']):
    print(line)
The fileinput.input() call reads lines lazily and sequentially, so it doesn't keep them in memory after they've been read. Even more simply, you can iterate over the file object directly, since file objects in Python are iterable.
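The direct-iteration approach mentioned above can be sketched like this (the file name and contents are hypothetical, created here just so the example is self-contained):

```python
# Create a small sample file so the example can run on its own
# (in practice you would already have a large file on disk).
with open('sample.txt', 'w') as f:
    f.write('first line\nsecond line\n')

# Iterating over the file object reads one line at a time,
# so only a single line is held in memory at any moment.
with open('sample.txt') as f:
    for line in f:
        print(line, end='')  # lines keep their trailing newline
```

Because the file object yields lines on demand, this scales to files far larger than available RAM, with no extra module needed.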