Sunday 17 November 2013

Data scraping tool for non-coding journalists launches

A tool which helps non-coding journalists scrape data from websites has launched in public beta today.

Import.io lets you extract data from any website into a spreadsheet simply by mousing over a few rows of information.

Until now import.io, which we reported on back in April, has been available in private developer preview and has been Windows only. It is now also available for Mac and is open to all.

Although import.io plans to charge for some services at a later date, there will always be a free option.

The London-based start-up is trying to solve the problem that there is "lots of data on the web, but it's difficult to get at", Andrew Fogg, founder of import.io, said in a webinar last week.

Those with the know-how can write a scraper or use an API to get at data, Fogg said. "But imagine if you could turn any website into a spreadsheet or API."

Uses for journalists

Journalists can find stories in data. For example, if I wanted to do a story on the type of journalism jobs being advertised and the salaries offered, I could research this by looking at various websites which advertise journalism jobs.

If I were to gather the data from four different jobs boards and enter the information manually into a spreadsheet, it would take hours if not days; if I were to write a screen scraper for each of the sites, it would require coding knowledge and would probably take a couple of hours. Using import.io I can create a single dataset from multiple sources in a few minutes.
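
For a sense of what "writing a screen scraper for each of the sites" involves, here is a minimal sketch in Python using the requests and BeautifulSoup libraries. The jobs-board URLs and CSS selectors are hypothetical; a real scraper would need a hand-built variant of parse_board() for every site.

    import csv

    import requests
    from bs4 import BeautifulSoup

    def parse_board(url, row_sel, title_sel, salary_sel):
        # Fetch one jobs board and pull the title and salary out of each listing.
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        for row in soup.select(row_sel):
            title = row.select_one(title_sel)
            salary = row.select_one(salary_sel)
            yield {"source": url,
                   "title": title.get_text(strip=True) if title else "",
                   "salary": salary.get_text(strip=True) if salary else ""}

    # Each site needs its own hand-found selectors, the part that takes the hours.
    boards = [
        ("https://jobsboard-a.example.com/journalism", "div.job", "h2", ".salary"),
        ("https://jobsboard-b.example.com/media", "li.listing", ".role", ".pay"),
    ]

    with open("journalism_jobs.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["source", "title", "salary"])
        writer.writeheader()
        for board in boards:
            writer.writerows(parse_board(*board))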

I can then search and sort the dataset and find out different facts, such as how many unpaid internships are advertised, or how many editors are currently being sought.

How it works

When you download the import.io application you see a web browser. This browser allows you to enter a URL for any site you want to scrape data from.

To take the example of the jobs board, this is structured data, with the job role, description and salaries displayed.

The first step is to set up 'connectors' and to do this you need to teach the system where the data is on the page. This is done by hitting a 'record' button on the right of the browser window and mousing over a few examples, in this case advertised jobs. You then click 'train rows'.

It takes between two and five examples to teach import.io where all of the rows are, Fogg explained in the webinar.

The next step is to declare the type of data and add column names. For example there may be columns for 'job title', 'job description' and 'salary'. Data is then extracted into the table below the browser window.
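
import.io does not expose its internals, but a trained connector amounts to a row pattern plus one pattern per declared column. A rough Python sketch of that idea, with hypothetical CSS selectors standing in for whatever the tool infers from the mouse-over examples:

    from bs4 import BeautifulSoup

    # Hypothetical stand-in for what the tool learns from 2-5 highlighted rows.
    connector = {
        "rows": "div.job-listing",
        "columns": {"job title": "h2.title",
                    "job description": "p.summary",
                    "salary": "span.salary"},
    }

    def extract(html, connector):
        # Apply the row pattern, then each column pattern within every row.
        soup = BeautifulSoup(html, "html.parser")
        table = []
        for row in soup.select(connector["rows"]):
            record = {}
            for name, selector in connector["columns"].items():
                cell = row.select_one(selector)
                record[name] = cell.get_text(strip=True) if cell else None
            table.append(record)
        return table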

Data from different websites can then be "mixed" into a single searchable database.

In the example used in the webinar, Fogg demonstrated how import.io could take data relating to rucksacks for sale on a shopping website. The tool can learn the "extraction pattern", Fogg explained, and apply that to another product. So rather than mousing over the different rows of sleeping bags advertised, for example, import.io was automatically able to detect where the price and product details were on the page as it had learnt the structure from how the rucksacks were organised. The really smart bit is that the data from all products can then be automatically scraped and pulled into the spreadsheet. You can then search 'shoes' and find the data has already been pulled into your database.

When a site changes its code a screen scraper would become ineffective. Import.io has a "resilience to change", Fogg said. It runs tests twice a day and users get notified of any changes and can retrain a connector.

It is worth noting that a site that has been scraped will be able to detect that import.io has extracted the data, as the requests will appear in the source site's web logs.

Case studies

A few organisations have already used import.io for data extraction. Fogg outlined three.

    British Red Cross

The British Red Cross wanted to create an iPhone app with data from the NHS Choices website. The NHS wanted the charity to use the data but the health site does not have an API.

By using import.io, data was scraped from the NHS site. The app is now in the iTunes store and users can enter a postcode to find hospital information based on the data from the NHS site.

"It allowed them to build an API for a website where there wasn't one," Fogg said.

    Hewlett Packard

Fogg explained that Hewlett Packard wanted to monitor the prices of its laptops on retailers' websites.

They used import.io to scrape the data from the various sites and were able to monitor the prices at which the laptops were being sold in real time.
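
Fogg did not describe the mechanics, but the underlying pattern is straightforward polling. A bare-bones Python sketch, in which the retailer URL, price selector and five-minute interval are all illustrative assumptions:

    import time

    import requests
    from bs4 import BeautifulSoup

    WATCHED = {"https://retailer.example.com/laptop-x": "span.price"}  # placeholder

    def check_prices(last_seen):
        for url, selector in WATCHED.items():
            soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
            cell = soup.select_one(selector)
            price = cell.get_text(strip=True) if cell else None
            if price != last_seen.get(url):
                print(f"price change at {url}: {last_seen.get(url)} -> {price}")
                last_seen[url] = price
        return last_seen

    seen = {}
    while True:
        seen = check_prices(seen)
        time.sleep(300)  # "real-time" in practice means frequent polling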

    Recruitment site

A US recruitment firm wanted to set up a system so that when any job vacancy appeared on a competitor's website, they could extract the details and push that into their Salesforce software. The initial solution was to write scrapers, Fogg said, but this was costly and in the end they gave up. Instead they used import.io to scrape the sites and collate the data.


Source: http://www.journalism.co.uk/news/data-scraping-tool-for-non-coding-journalists-launches/s2/a554002/

Friday 15 November 2013

ScraperWiki lets anyone scrape Twitter data without coding

The Obama administration’s open data mandate announced on Thursday was made all the better by the unveiling of the new ScraperWiki service on Friday. If you’re not familiar with ScraperWiki, it’s a web-scraping service that has been around for a while but has primarily focused on users with some coding chops or data journalists willing to pay to have someone scrape data sets for them. Its new service, though, currently in beta, also makes it possible for anyone to scrape Twitter to create a custom data set without having to write a single line of code.

Taken alone, ScraperWiki isn't that big of a deal, but it's part of a huge revolution that has been called the democratization of data. More data is becoming available all the time, whether from the government, corporations or even our own lives, only it's not of much use unless you're able to do something with it. ScraperWiki is now one of a growing list of tools dedicated to helping everyone, not just expert data analysts or coders, analyze (and, in its case, generate) the data that matters to them.

After noticing a particularly large number of tweets in my stream about flight delays yesterday, I thought I'd test out ScraperWiki's new Twitter search function by gathering a bunch of tweets directed to @United. The results, from 1,697 tweets dating back to May 3, are pretty fun to play with, if not that surprising. (Also, I have no idea how far back the tweet search will go or how long it will take using the free account, which is limited to 30 minutes of compute time a day. I just stopped at some point so I could start digging in.)

First things first, I ran my query. Here’s what the data looks like viewed in a table in the ScraperWiki app.

Next, it’s a matter of analyzing it. ScraperWiki lets you view it in a table (like above), export it to Excel or query it using SQL, and will also summarize it for you. This being Twitter data, the natural thing to do seemed to be analyzing it for sentiment. One simple way to do this right inside the ScraperWiki table is to search for a particular term that might suggest joy or anger. I chose a certain four-letter word that begins with f.

Surprisingly, I only found eight instances. Here’s my favorite: “Your Customer Service is better than a hooker. I paid a bunch of money and you’re still…” (You probably get the idea.)

But if you read my “data for dummies” post from January, you know that we mere mortals have tools at our disposal for dealing with text data in a more refined way. IBM’s Many Eyes service won’t let me score tweets for sentiment, but I can get a pretty good idea overall by looking at how words are used. For this job, though, a simple word cloud won’t work, even after filtering out common words, @united and other obvious terms. Think of how “thanks” can be used sarcastically and you can see why.

Using the customized word tree, you can see that "thanks" sometimes means "thanks." Other times, not so much. I know it's easy to dwell on the negative, but consider this: "worst" had 28 hits while "best" had 15. One of those was referring to Tito's vodka and at least three were referring to skyline views.
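
The counting behind both the word cloud and the "worst" versus "best" comparison is simple to reproduce. A rough Python sketch, assuming the tweets have been exported to a CSV file with a "text" column (both names are placeholders):

    import csv
    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "to", "and", "of", "i", "you", "is", "for",
                 "on", "my", "in", "at", "it", "that", "united"}

    counts = Counter()
    with open("united_tweets.csv", newline="") as f:
        for row in csv.DictReader(f):
            for word in re.findall(r"[a-z']+", row["text"].lower()):
                if word not in STOPWORDS:
                    counts[word] += 1

    print(counts["worst"], counts["best"])  # the comparison made above
    print(counts.most_common(20))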

Here’s a phrase net filtering the results by phrases where the word “for” connects two words.

Anyhow, this was just a fast, simple and fairly crude example of what ScraperWiki now allows users to do, and how the resulting data can be combined with other tools to analyze and visualize it. Obviously, it's more powerful if you can code, but new tools are supposedly on the way (remember, this is just a beta version) that should make it easier to scrape data from even more sources.

In the long term, though, services like ScraperWiki should become a lot more valuable as tools for helping us generate and analyze data rather than just believe what we’re told. Want to improve your small business, put your life in context or perhaps just write the best book report your teacher has ever seen? It’s getting easier every day.


Source: http://gigaom.com/2013/05/10/scraperwiki-lets-anyone-scrape-twitter-data-without-coding/

Thursday 14 November 2013

What is data scraping and how can I stop it?

Data scraping (also called web scraping) is the process of extracting information from websites. Data scraping focuses on transforming unstructured website content (usually HTML) into structured data which can be stored in a database or spreadsheet.

The way data is scraped from a website is similar to that used by search bots – human web browsing is simulated by using programs (bots) which extract (scrape) the data from a website.
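
A minimal Python illustration of the point: a scraping bot issues the same HTTP request a browser would and receives the same HTML. The URL and selector below are placeholders.

    import requests
    from bs4 import BeautifulSoup

    # The scraper gets the same HTML a human visitor's browser would get.
    resp = requests.get("https://example.com/products",
                        headers={"User-Agent": "Mozilla/5.0"},  # browser-like UA
                        timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")

    # Transform unstructured markup into structured rows for a database.
    rows = [tag.get_text(strip=True) for tag in soup.select("div.product")]
    print(rows)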

Unfortunately, there is no efficient way to fully protect your website from data scraping. This is because data scraping programs (also called data scrapers or web scrapers) obtain the same information as your regular web visitors.

Even if you block the IP address of a data scraper, this will not prevent it from accessing your website. Most data scraping bots use large IP address pools and automatically switch the IP address in case one IP gets blocked. And if you block too many IPs, you will most probably block many of your legitimate visitors.
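
The rotation described above is easy to picture with the requests library's proxies option; the proxy addresses in this sketch are placeholders.

    import itertools

    import requests

    # Placeholder pool; real scraping bots rotate through large proxy lists.
    proxy_pool = itertools.cycle(["http://203.0.113.10:8080",
                                  "http://203.0.113.11:8080",
                                  "http://203.0.113.12:8080"])

    def fetch(url):
        # Each request leaves through the next proxy in the pool, so blocking
        # one IP only removes a single address from the rotation.
        proxy = next(proxy_pool)
        return requests.get(url, proxies={"http": proxy, "https": proxy},
                            timeout=10)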

One of the best ways to protect globally accessible data on a website is through copyright protection. This way you can legally protect the intellectual ownership of your website content.

Another way to protect your site content is to password protect it. This way your website data will be available only to people who can authenticate with the correct username and password.


Source: http://kb.siteground.com/what_is_data_scraping_and_how_can_i_stop_it/

A Guide to Web Scraping Tools

This post is ghost written by one of my bloggers; I outsource 95% of my life. The author's views below are entirely his or her own and may not reflect the views of Gareth James aka SEO Doctor.

Web scrapers are tools designed to extract or gather data from a website via a crawling engine, usually written in Java, Python, Ruby or another programming language. Web scrapers are also called web data extractors, data harvesters or crawlers; most of them are web-based or can be installed on a local desktop.

Their main purpose is to enable webmasters, bloggers, journalists and virtual assistants to harvest data from a website, whether text, numbers, contact details or images, in a structured way that cannot easily be achieved by manual copy and paste. Typically, a scraper transforms unstructured content on the web, in HTML format, into structured data stored in a local database or spreadsheet, or automates human web browsing.


Source: http://www.garethjames.net/a-guide-to-web-scrapping-tools/

Tuesday 12 November 2013

WP Web Scraper

An easy-to-implement, professional web scraper for WordPress. It can be used to display realtime data from any website directly in your posts, pages or sidebar. Use it to include realtime stock quotes, cricket or soccer scores or any other generic content. The scraper is an extension of the WP_HTTP class and uses phpQuery or XPath for parsing HTML. Features include:

    Can be easily implemented using the button in the post / page editor.
    Configurable caching of scraped data. Cache timeout can be defined in minutes for every scrap.
    A configurable user agent can be set for every scrap.
    Scrap output can be displayed through a custom template tag, or a shortcode in a page, post or sidebar (through a text widget).
    Other configurable settings like timeout, disabling shortcodes etc.
    Error handling - Silent fail, error display, custom error message or display expired cache.
    Clear or replace a regex pattern from the scrap before output.
    Option to pass POST arguments to a URL to be scraped.
    Dynamic conversion of the scrap to a specified character encoding (using iconv) to scrap data from sites that use a different charset.
    Create scrap pages on the fly using dynamically generated URLs or post arguments based on your page's GET or POST arguments.
    Callback function to parse the scraped data.

For demos and support, visit the WP Web Scraper project page. Comments appreciated.

Tags: curl, html, import, page, phpquery, Post, Realtime, sidebar, stock market, web scraping, xpath   



Source: http://wordpress.org/plugins/wp-web-scrapper/

Monday 11 November 2013

Yellow Page Scraping: How Useful Is It?

In short, technology has changed the world, and it has certainly changed the Yellow Pages (YP) industry.

Thanks to the World Wide Web, anyone, anywhere can now access the online YP. Just a click away, you can find the information you need, in your city or across the world, from the best instructors to hospital locations.

Since its inception, the organization's site has become one of the most visited online YP destinations. It provides updated information on food courts, cinemas, travel agents, doctors, editors, auto part stores, game centers, restaurants, hotels, pubs, places to eat and what not; every service in and around Hyderabad can be found there.

Some commentators have pointed out that YP advertising is very expensive compared with search engine advertising. So, for meaningful ROI from search traffic, can the YP be used as part of a good marketing campaign? My answer is yes: the YP can and should be used as an important component of local search engine optimization. The YP can be used for SEO, and here is some information about how to approach it.

First, why would you use the YP for natural search optimization?
Well, YP and local search companies enjoy strong placement in the search engines themselves, and a fair amount of organic SEO traffic is driven through them.
YP sites usually have high PageRank and can rank for "trade"-type searches that smaller sites cannot reach. For example, IYP sites rank highly in Google Maps OneBox results for searches such as:

"San Francisco Accountants"," Italian restaurant in Seattle"," Garden Supplies, Atlanta"," Miami Grocery Store".

IYP sites also receive a good deal of direct navigation traffic, since their URLs are heavily promoted across portal sites, news outlets, books and many areas of the print YP.

However, to score well for queries like the examples above, businesses must claim and update their listings on the IYP sites. Users click through to the IYP sites, and if a business has no listing there, it will receive no referrals.

Now, the question of cost:
As various discussions on the Sterling Folx blog point out, YP advertising is expensive, especially in very popular categories. Wearing my "natural search optimization" hat, I suggest that optimizing your presence on YP sites need not cost money at first: many of these sites let you add information about your company, including a URL, at no cost. So do everything you can before you pay anything.

As with the major search engines, placement is very important: you want your entry to display near the top of the page. (You can reasonably expect that YP users scan listings with some sort of "heat map", as they do on search engines, and that the sweet spot is near the top of the page.) Consider alternative listing names that may improve your ranking.

Ratings of your business on online directory listings also factor into ranking order, and this is one area you can improve at no cost. Ask family, friends and others who are positively disposed toward you to rate your business on each of these sites. Have some very satisfied clients? Give them a voucher for a future visit and ask them to rate you on these online sites.

Finally, do hyperlinks from YP sites help improve your site's PageRank? In short, it depends on:
(1) whether the IYP pages that display your business information are spidered and ranked; and

(2) whether the link to your site is crawlable (a nofollowed link or one wrapped in tracking code may not be search-engine friendly).


Source: http://goarticles.com/article/Yellow-Page-Scraping-How-Use-Full-It-Is/5072397/

Thursday 7 November 2013

Simple Method of Data Scraping

There are many data scraping tools available on the Internet. With these tools you can download large amounts of data without stress. Over the last decade the Internet revolution has turned the web into the world's information center, and you can find almost any information online. However, if you want to work with specific information, you may have to visit many different sites and copy the information from each one into a document by hand, which is hard work. With scraping tools you can save time and money and reduce manual labor.

Web data extraction tools pull data out of HTML pages and let you compare data across websites. Many new sites are hosted on the Internet every day, and you cannot visit them all yourself; with data mining tools you can cover far more pages. If you work with a wide range of applications, a scraping tool is also useful for you.

Data retrieval software tools are used for the structured data found on the Internet. There are many search engines to help you find sites on a particular topic, but different sites present their data in different styles. Expert scraping helps you compare different sites and structures and record updated data.

A web crawler is a software tool used to index web pages on the Internet and move data from the Internet to your hard drive, so you can work through it much faster than while connected. Speed matters when you are downloading data from the Internet, since downloads can take considerable time. Another tool, called an email extractor, lets you collect the contact data of people and companies; with it you can easily reach your target email clients and send targeted advertisements to customers. It is among the best equipment for building a customer database.
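
A toy Python version of such a crawler, which fetches a page, saves it to disk and queues the links it finds; the start URL and page cap are placeholders for illustration.

    from collections import deque
    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    queue = deque(["https://example.com/"])  # placeholder start page
    seen = set()

    while queue and len(seen) < 50:  # small cap for the sketch
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        html = requests.get(url, timeout=10).text
        with open(f"page_{len(seen)}.html", "w", encoding="utf-8") as f:
            f.write(html)  # "moving data to your hard drive"
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            queue.append(urljoin(url, a["href"]))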

Scraping and data extraction can be used by any organization, corporation or company that wants a data set of targeted customers for an industry, or any data that is available on the net, such as email IDs, site names, search terms or whatever else can be found on the web. In most cases, data scraping and data mining services are marketed and used, for example, to reach targeted customers: if marketing company X wants to reach restaurants in a California city, the software can extract data on that city's restaurants, and company X can use that information to market its product to those restaurant businesses.

MLM and network marketing companies use data mining and data services to find potential new clients by extracting data, and then reach them through customer-service calls, postcards and email marketing, building large networks for themselves and their products.

However, some scraping tools on the Internet are not free, and some sites carry reliable information about these tools. You can download them by paying a nominal amount.


Source: http://goarticles.com/article/Simple-method-of-Data-Scrapping/4692026/