May 30
One of the more useful skills one can have, at least if one is a researcher, is knowing how to program a computer to extract online data, e.g. stock market prices.

I personally use VB.NET, but I'm sure most programming languages have built-in functions that make the process quite easy.

I wrote "quite easy" because nearly everyone can do it, assuming they know basic programming. An introductory book, or a little tutoring from an experienced programmer, should be sufficient.

Two lines are all it takes to download a web page:

Dim wc As New System.Net.WebClient
wc.DownloadFile("http://EXAMPLE.com/DOWNLOADME.html", "savedFile.txt")

The above lines tell the computer to save the webpage's source as a text file named "savedFile.txt" in the same directory as the VB.NET program.

Naturally, one wouldn't write a program just to download a single page. It's when one needs to download dozens or more pages that the programming approach pays off. If these pages are numbered (they often are), all one needs to do is loop through them, e.g.:

Dim wc As New System.Net.WebClient
For i As Integer = 0 To 1000
    wc.DownloadFile("http://EXAMPLE.com/DOWNLOADME.php?id=" & CStr(i), "savedFile-" & CStr(i) & ".txt")
Next

With stock market data, one often needs to specify the tickers. Thankfully, this is easily overcome:

Dim tickerList() As String = {"ABC", "XYZ", "JPJ"}
Dim wc As New System.Net.WebClient
For i As Integer = 0 To tickerList.GetUpperBound(0)
    wc.DownloadFile("http://EXAMPLE.com/DOWNLOADME.php?ticker=" & tickerList(i), "savedFile-" & tickerList(i) & ".txt")
Next

If neither of these approaches works, the process is slightly more challenging: one needs to search for links within the downloaded source files. It's doable, though too involved to cover fully here; a rough sketch of the idea follows.
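For the curious, here is a minimal sketch, assuming the links sit in ordinary href="..." attributes (the file name and the pattern are illustrative; messy real-world pages may call for a proper HTML parser):

Dim source As String = System.IO.File.ReadAllText("savedFile.txt")
' Print every link target found in the saved page source.
For Each m As System.Text.RegularExpressions.Match In System.Text.RegularExpressions.Regex.Matches(source, "href=""(.*?)""")
    Console.WriteLine(m.Groups(1).Value)
Next

One could then feed the extracted URLs straight back into DownloadFile.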

Although writing the code for downloading webpages is very fast, the actual execution is very slow. Speed varies a lot with the internet connection and proximity to the remote server, but a rule of thumb is that one page takes one second to download (one should also consider waiting a short while between each download). One hour, as you know, consists of 3,600 seconds. One day is 86,400, and one month is 2.6 million seconds.
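Adding such a pause is a one-line affair. Here is the earlier loop again with a one-second delay between requests (the delay length is a judgment call, not a fixed rule):

Dim wc As New System.Net.WebClient
For i As Integer = 0 To 1000
    wc.DownloadFile("http://EXAMPLE.com/DOWNLOADME.php?id=" & CStr(i), "savedFile-" & CStr(i) & ".txt")
    System.Threading.Thread.Sleep(1000) ' be polite to the server: wait one second
Next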

Because of these time concerns, I almost always download all the raw source files to a hard drive without manipulating them. You never want to discover a bug in the data-extraction algorithm and then have to do all the downloading again. Once the files are on the hard drive, one can easily read them and save the relevant information into new files. Reading a file takes something like a hundredth of a second or less. The downside of this approach is that raw data takes up tremendous amounts of space, but with affordable 1TB external USB-connected drives, this is not a problem.
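As a sketch of the extraction step, assuming one wants to pull a single price field out of each file saved by the earlier loop (the field and the pattern are made up; the real ones depend on the page's markup):

Using writer As New System.IO.StreamWriter("extracted.csv")
    writer.WriteLine("id,price") ' header row; the price field is illustrative
    For i As Integer = 0 To 1000
        ' Read a raw source file downloaded earlier; never modify the raw files themselves.
        Dim source As String = System.IO.File.ReadAllText("savedFile-" & CStr(i) & ".txt")
        Dim m As System.Text.RegularExpressions.Match = _
            System.Text.RegularExpressions.Regex.Match(source, "<span id=""price"">(.*?)</span>")
        If m.Success Then writer.WriteLine(CStr(i) & "," & m.Groups(1).Value)
    Next
End Using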

Although reading files from the hard drive is many, many times faster than downloading them in the first place, working with data loaded into memory (RAM, as variables in the program) is many, many times faster still than reading and writing files. I therefore prefer to make one, and only one, text file (CSV) with all the relevant data extracted from the raw files. Every time the program starts up, this file is loaded; when the program finishes, the manipulated variables are saved back to a text file.
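A sketch of that workflow, assuming the consolidated file is the "extracted.csv" produced above:

' Load the whole CSV into memory at startup.
Dim rows As New List(Of String())
For Each fileLine As String In System.IO.File.ReadAllLines("extracted.csv")
    rows.Add(fileLine.Split(","c))
Next

' ... all manipulation happens on 'rows' in memory ...

' Save everything back to disk when the program finishes.
Dim output As New List(Of String)
For Each row As String() In rows
    output.Add(String.Join(",", row))
Next
System.IO.File.WriteAllLines("extracted.csv", output.ToArray())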

I know I only scratched the surface here, but I hope this short text will inspire other researchers to learn the skill of automated data downloading. Once fluent in instructing computers to do your dirty work, you have an extremely valuable slave at your disposal.

P.S. Some useful code can be found here.

Work in progress!

