Web scraping conversion rates and graphing [closed]

I've come to need a way of tracking and recording conversion rates of a fake currency on Roblox. I think JavaScript is suitable for the tracking part; however, I'd also like to record that data in a spreadsheet and create a graph that refreshes as the data comes in. The resolution of the graph will be at most 5-second intervals. If you take a look at the Roblox conversion page below, you can see the current positions made by other people to trade their Robux or Tix for Robux or Tix. The page displays the top trading positions made by others. I only want to track the conversion rates at the top of each column, as shown on the trade currency page.

If I were to create a program stored locally on my computer, what language(s)/program(s) should I use to accomplish this? Also, if I were to host a server (using my Raspberry Pi or a free hosting service), what language(s)/script(s) should I use? Lastly, if I were to make this an online thing (with my R-Pi or free hosting), I would like to access the graph through a browser, either on my network or on the internet (which I've done before when creating a website for my R-Pi).

Thank you for your time in reading, Cameron

Link to the conversion rate page

EDIT: I now know you can't see the trade currency page without logging in with an account (which is free). You can see an image here and a wiki page here. I've decided to use Bubby4j's answer, which supplied me with a helpful system that already did what I asked for. I now only need to fix it (as it may be outdated) and get it running on a server.

I actually wrote a PHP script that does this. It uses MySQL to store the Trade Currency data.

https://www.dropbox.com/sh/c46wuzhf7636htc/AAALYLvzpbnBzK2qSjfybcGxa?dl=0

There's a zip of what I had. I don't know if it still works but it might be useful.

It even logs into Roblox automatically.

I used a cron job to trigger the script that recorded the current TC rates every 5 seconds (note that standard cron only has one-minute granularity, so sub-minute runs need a small wrapper).
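Since cron itself can't fire more often than once a minute, a 5-second cadence needs a small polling wrapper around the recording script. A minimal sketch in Python (the PHP script name here is hypothetical, not the one from the zip):

```python
import subprocess
import time

def poll(cmd, interval=5.0, iterations=None):
    """Run `cmd` repeatedly, sleeping `interval` seconds between runs.

    `iterations=None` loops forever (the normal case); passing a number
    makes the function easy to test. Returns how many times it ran.
    """
    runs = 0
    while iterations is None or runs < iterations:
        subprocess.run(cmd)
        runs += 1
        if iterations is None or runs < iterations:
            time.sleep(interval)
    return runs

# e.g. poll(["php", "record_tc_rates.php"])  # hypothetical recorder script
```

You would launch this once at boot (for example from an `@reboot` cron entry) rather than listing a cron job per run.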

The included .sql file contains the structure of the database & some sample data.

You'll likely need to edit quite a few things to get it working, but it should point you in the right direction.

I am not sure about the nature of Roblox, but what you describe here is called web crawling, and there is no single language for it; most languages are suitable. The first thing I would do is check whether Roblox provides any usable APIs, which exist to help developers like yourself fetch the data you need in a friendlier format such as JSON, which you can easily consume in any language. If an API is not available, you can try to fetch the data as plain text with tools such as curl or a text-based web browser, in order to determine whether an HTML parser will suffice or whether the site requires something more advanced, such as a JavaScript interpreter. For the latter case there are headless browsers such as PhantomJS (also usable from the command line just like curl, with full JS support). It is preferable to limit yourself to just fetching the page, parsing the HTML, and extracting the data you need, rather than using a full headless-browser solution like PhantomJS, since the latter can slow things down and is generally more complex.
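The "fetch it as plain text first" check can be sketched in Python, standing in for curl (the URL you would pass is whatever trade page you are after; in practice the Roblox page also needs a logged-in session, which this sketch does not handle):

```python
import urllib.request

def fetch_html(url, timeout=10):
    """Fetch a page as plain text, roughly what `curl` would show you.

    If the value you're after appears in this raw HTML, a simple HTML
    parser will do; if it only shows up in a real browser, the page is
    building it with JavaScript and you'd need a headless browser.
    """
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read().decode("utf-8", errors="replace")

# e.g. html = fetch_html("https://www.roblox.com/...")
# print("Rate" in html)  # is the data in the raw markup?
```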

For the sake of simplicity, since you mentioned that your final goal is a web server which serves the data, I would go the following way:

  1. Install a LEMP (Linux, Nginx, MySQL, PHP) or LAMP (Linux, Apache, MySQL, PHP) stack. Just install it on your Linux box using your favourite package manager.

  2. Since the final result is a web server, you might want to use PHP, which comes out of the box with the stacks mentioned above. If there is an API, it gets as simple as fetching the API endpoint, running a JSON/XML parse on it, and using the data. If there is no API, first fetch the page in PHP using curl or the file_get_contents function, then try to parse it with any HTML parser available for PHP, such as Simple HTML DOM Parser.
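Both branches of step 2 can be sketched; Python is used here instead of PHP since the logic is identical (the JSON field name and the CSS class are assumptions for illustration, not things Roblox is known to expose):

```python
import json
import urllib.request
from html.parser import HTMLParser

def fetch_json(api_url):
    """The API branch: fetching and parsing is one call each."""
    with urllib.request.urlopen(api_url, timeout=10) as resp:
        return json.load(resp)

class RateExtractor(HTMLParser):
    """The no-API branch: collect the text of every element carrying a
    given CSS class. The class name on the real trade page is an
    assumption -- inspect the page's HTML to find the right one."""

    def __init__(self, css_class):
        super().__init__()
        self.css_class = css_class
        self._in_target = False
        self.values = []

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if self.css_class in classes:
            self._in_target = True

    def handle_data(self, data):
        if self._in_target and data.strip():
            self.values.append(data.strip())
            self._in_target = False

# usage (hypothetical markup):
# p = RateExtractor("rate")
# p.feed(html_text)
# print(p.values)
```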

The above steps apply if you don't need the complexity of a full-blown browser. If you do, you should find comfort in PhantomJS: either use it standalone (in JavaScript) to fetch your data, or search for a PHP interface that communicates with PhantomJS. The approach is similar either way: fetch the page and parse its HTML to get the required data.

  3. Since you already have a web server (LEMP/LAMP), you are in fact already able to present a web page to your devices online. So simply do step 2, save the results to the database (MySQL), and generate a page matching your needs. Note that PHP runs only when a user loads the page it resides on, so if you need periodic checks, use cron jobs to schedule your scripts to re-run at certain times.
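The "save to the database, then serve it" part of step 3 can be sketched with Python's built-in sqlite3 standing in for MySQL (the table layout is an assumption; a real deployment would define its own schema):

```python
import sqlite3
import time

def record_rate(db_path, rate):
    """Append one conversion-rate sample; this is what the periodic
    job would call every 5 seconds."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS rates (ts REAL, rate REAL)")
    con.execute("INSERT INTO rates VALUES (?, ?)", (time.time(), rate))
    con.commit()
    con.close()

def recent_rates(db_path, limit=100):
    """What the graph page would query: newest samples first."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT ts, rate FROM rates ORDER BY ts DESC, rowid DESC LIMIT ?",
        (limit,),
    ).fetchall()
    con.close()
    return rows
```

The page that draws the refreshing graph would just call `recent_rates` and render the rows; the 5-second refresh is then a client-side concern.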

Note 1: The steps above are very general, since you did not specify your background in this field; they simply describe how web crawling works in general.

Note 2: If you wish to make your service accessible outside of your network, you should configure your web server (LEMP/LAMP) to listen on port 80 (usually the default) and then give your users your external IP address. If your IP changes dynamically, you can use free dynamic-DNS services such as No-IP. There are other, more involved solutions, such as renting a domain name.