The Web 3.0

Published on August 29th, 2010

[Image: A map of linked data and a preview of Paper.li]

The entertainment system was belting out the Beatles’ “We Can Work It Out” when the phone rang. When Pete answered, his phone turned the sound down by sending a message to all the other local devices that had a volume control. His sister, Lucy, was on the line from the doctor’s office:

“Mom needs to see a specialist and then has to have a series of physical therapy sessions. Biweekly or something. I’m going to have my agent set up the appointments.”

Pete immediately agreed to share the chauffeuring.

At the doctor’s office, Lucy instructed her Semantic Web agent through her handheld Web browser. The agent promptly retrieved information about Mom’s prescribed treatment from the doctor’s agent, looked up several lists of providers, and checked for the ones in-plan for Mom’s insurance within a 20-mile radius of her home and with a rating of excellent or very good on trusted rating services.

It then began trying to find a match between available appointment times (supplied by the agents of individual providers through their Web sites) and Pete’s and Lucy’s busy schedules. In a few minutes the agent presented them with a plan.

How could this be possible?

So begins the 2001 article “The Semantic Web” by Sir Tim Berners-Lee. The semantic web: a giant myth or a dream come true? It all started around 1999, when Sir Tim Berners-Lee stated the following:

I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize.

Current status of the web

Nowadays something is boiling beneath the surface: a movement lobbying to get public institutions and corporations to publish their data in a form that machines can interpret. There are solid technologies for achieving this goal, among them Linked Data, RDFa, Atom and RSS. You all know RSS and Atom, but they differ hugely from RDFa and Linked Data in that they do not explain the semantic meaning of the content.
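
To make that difference concrete, here is a minimal sketch in Python using the rdflib library (the URIs and the FOAF vocabulary are my illustration, not something from this post): in a feed, a fact is just a string, while in RDF it becomes an explicit statement built from shared, dereferenceable terms.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF

# In an RSS/Atom feed, a fact is an opaque string; a machine sees only characters:
rss_title = "Tim Berners-Lee knows Ora Lassila"

# In RDF (the model behind Linked Data and RDFa), the same fact is an explicit
# subject-predicate-object triple using a shared vocabulary (FOAF), so any
# RDF-aware agent can tell who relates to whom and how:
g = Graph()
tim = URIRef("http://dbpedia.org/resource/Tim_Berners-Lee")  # illustrative URIs
ora = URIRef("http://dbpedia.org/resource/Ora_Lassila")
g.add((tim, FOAF.knows, ora))
g.add((tim, FOAF.name, Literal("Tim Berners-Lee")))

print(g.serialize(format="turtle"))
```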

Linked Data and RDFa change this: by linking words, concepts, dates and other distinct data types to a set of common dictionaries and taxonomies, we, the people, can explain what this data means. The really big difference between the two approaches is that with Linked Data and RDFa it becomes possible to deduce what concepts, articles and links mean by referring to other material that has already been interpreted.
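
As a rough sketch of that “follow your nose” idea, again with rdflib: dereferencing a Linked Data URI returns RDF about that resource, and every predicate (and many objects) in the result is itself a URI that an agent can look up in turn. The DBpedia URL below is just an example of a server that publishes Linked Data; it is not mentioned in the post.

```python
from rdflib import Graph

g = Graph()
# Fetch the RDF description of a resource; any URI that serves RDF would do.
g.parse("http://dbpedia.org/data/Tim_Berners-Lee.rdf")

# Each predicate points into a shared vocabulary, and many objects are
# themselves resources that can be dereferenced and interpreted the same way.
for subj, pred, obj in list(g)[:10]:
    print(pred, "->", obj)
```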

And by telling machines what data means, we let them help us perform the everyday tasks that are critical but laborious: setting up a biweekly chauffeuring schedule, arranging a meeting between many parties, or automatically scraping the web to research specific topics instead of doing it all manually.

This all sounds like a holy grail, right? Yet there are applications out there today, and a large movement is working hard at improving deductive reasoning, getting massive amounts of data linked up, and writing dictionaries and taxonomies in a machine-friendly way.
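
One way applications already tap into this linked-up data is SPARQL, the W3C query language for RDF. Here is a minimal, hedged sketch using the Python SPARQLWrapper library against DBpedia’s public endpoint (the endpoint and query are my example, not the post’s):

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# DBpedia republishes Wikipedia's structured data as Linked Data and offers
# a public SPARQL endpoint for querying it.
sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?name WHERE {
        <http://dbpedia.org/resource/Tim_Berners-Lee>
            <http://xmlns.com/foaf/0.1/name> ?name .
    }
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["name"]["value"])
```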

Applications today

Applications do actually exist today, though most of them are proofs of concept or in very early beta.

Further reading:

The linked data image is courtesy of the www.linkeddata.org project.