Monday, 27 February 2012

The ALCOTRA INNOVATION project


The ALCOTRA INNOVATION strategic project, co-financed by the ALCOTRA France-Italy Cross-border Cooperation Programme 2007-2013, Axis 1 "Development and innovation", Measure 1.1 "Productive systems", officially started in September 2010 and will end in 2013.
The lead partner is the Regione Piemonte, Directorate for Innovation, Research and Universities. The partnership is made up of the Provincia di Torino, the Italian regions of Vallée d'Aoste and Liguria, and the French regions of Rhône Alpes and Provence-Alpes-Côte d'Azur.
The overall budget amounts to €5,948,600.00.

The project objectives are as follows:
  • Improve the innovation capacities of cross-border productive systems so as to make them more competitive internationally and deliver better results.
  • Encourage, at local and cross-border level, collaboration and mutual knowledge among companies, clusters, research centres, universities and institutions, for the exchange of good practices and for technology-transfer and research-and-development activities.
  • Support the cross-border working groups in designing and, where appropriate, carrying out experimentation projects based on the innovative Living Lab approach in the fields of Intelligent Mobility, Smart Energies, Creative Industries and E-health.
  • Raise decision-makers' awareness when user-centric innovation policies are drawn up at cross-border level.

Taking into account, on the one hand, the results of the mapping of innovation actors and of the territories' potential and, on the other, the strategic directions of the regions concerned, the partnership identified the experimentation fields listed below and designated the leaders of the cross-border working groups formed during autumn 2011:
  • Intelligent Mobility (Regione Piemonte, with the support of the Regione Liguria and the Provincia di Torino).
  • Smart Energies (Région Vallée d'Aoste).
  • E-health (Région Provence-Alpes-Côte d'Azur).
  • Creative Industries (Région Rhône Alpes).
Each working group provides for the participation and active involvement of companies, universities, research centres, business incubators, local authorities and end users (whether individuals or businesses).
The working groups aim to develop experimentation ideas based on the innovative Living Lab approach. The best feasibility plans will then receive guidance on the most appropriate funding sources for implementing the project ideas.
In parallel, the Provincia di Torino is launching an experimental territorial outreach initiative aimed at increasing the local productive system's participation in the Living Lab pilot actions and at spreading information about the initiatives of the innovation clusters present in the project area.
Finally, on the basis of the observations made and the actions tested throughout the project, a cross-border strategic plan for supporting innovation will be drawn up, together with an analysis of the state of the art and of the regions' strengths and weaknesses that decision-makers should take into account when defining strategies to promote cross-border collaboration and research-and-development activities based on the Living Lab methodology.

In more detail, the ALCOTRA INNOVATION project is organised into the following activities:
  1. Management and coordination (Partner responsible: Regione Piemonte).
  2. Analysis of the state of the art and identification of good practices (Partner responsible: Région Provence-Alpes-Côte d'Azur).
  3. Design of the pilot actions (Partner responsible: Région Vallée d'Aoste).
  4. Implementation of the pilot actions (Partner responsible: Regione Piemonte).
  5. Cross-border strategic plan for supporting innovation (Partner responsible: Région Rhône Alpes).
  6. Communication (Partner responsible: Regione Liguria).

Sunday, 26 February 2012

Alliance in the M2M industry


Sensinode Ltd., a leading provider of software that powers the Internet of Things, has announced a collaboration with the Telenor Objects unit of the Telenor Group. Telenor Objects will add support for Sensinode’s NanoServices™ solution to its Shepherd™ Managed M2M Services. Additionally, Sensinode has joined the Telenor Objects partner network.
The Shepherd platform is a cloud-based solution that allows applications to monitor and control networks of connected objects, such as GPS devices, temperature sensors and heart rate monitors, to give a few examples. Sensinode’s solutions enable development and support of device networks built around the IPv6 protocol and Embedded Web Services. With integration into Shepherd, connected objects that utilize 6LoWPAN and CoAP (the IETF standards for efficient IPv6 and M2M web services) can be readily integrated into the Shepherd platform.
“Telenor Objects is committed to providing M2M services with telecom network quality,” said Hans Christian Haugli, Chief Executive Officer of Telenor Objects. “Every part of our service architecture must operate with the robustness and reliability of traditional voice and data communications, and provide a future-proof path based on concepts of open, standards-based networks.”
The Sensinode NanoService™ solution provides end-to-end web services optimized for the unique constraints of M2M deployments. It provides a directory and semantic lookup of the web resources of each node, provides transparent proxy services between the traditional large-resource Internet and constrained-resource protocols, and supports an eventing (asynchronous push) model that is critical to the effectiveness of Embedded Web applications. Sensinode provides NanoService Device Libraries for C, C++, Java and Android based platforms, enabling rapid integration of embedded devices with web applications.
“Sensinode and Telenor Objects have a common vision of how connected devices will drive innovative services and the key role of managed services in orchestrating The Internet of Things,” said Adam Gould, Chief Executive Officer of Sensinode. “Shepherd is a superb platform for service delivery and we are very excited about working with Telenor Objects to make NanoServices a part of this solution architecture.”
Applications in the emerging Smart Grid, home monitoring and security, eHealth, street lighting and many other industrial/consumer segments involving wireless sensing and control are readily accommodated by the Telenor Objects and Sensinode solution.
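To illustrate the kind of CoAP interaction involved, here is a rough sketch using the open-source aiocoap Python library (the library choice, the node address and the resource path are illustrative assumptions, not part of the Sensinode/Telenor solution): a client registers as an observer of a sensor resource and then receives asynchronous push notifications as the value changes.

    # Sketch only: observe a CoAP resource over IPv6 with the aiocoap library.
    import asyncio
    from aiocoap import Context, Message, GET

    async def observe_temperature():
        protocol = await Context.create_client_context()
        # Address and resource path are made up for illustration.
        request = Message(code=GET, uri="coap://[2001:db8::1]/sensors/temp")
        request.opt.observe = 0          # register as an observer (CoAP Observe option)
        pending = protocol.request(request)

        first = await pending.response   # initial representation of the resource
        print("current:", first.payload.decode())

        # Every state change on the node is pushed to the client asynchronously.
        async for notification in pending.observation:
            print("update:", notification.payload.decode())

    asyncio.run(observe_temperature())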

IPv6 will address the revolution of M2M applications


How prepared is APNIC to deal with the impending IPv4 exhaustion?

Is a transition to IPv6 the only solution? Are the transition costs going to be high?
The APNIC community has established the policies needed for the continuance of IPv4 allocations to support IPv6 deployment and we have been active in providing all necessary information. Our IPv6 policies make address space easy to obtain, and our IPv6 processes have been streamlined to ensure quick and efficient services. IPv6 policies and systems are well established and stable now. We have implemented our own transition so that our services are fully available on IPv6 and we have developed training and educational materials. We have been working hard to encourage and support our community through the transition and we will go on doing this in 2011 and future years while the IPv6 transition is underway.
If we want to continue to operate a network at the price, performance and functional flexibility that is offered by packet switched networks, then the search for alternatives to IPv6 is necessarily constrained to a set of technologies that offer approaches that are, at a suitably abstract level, isomorphic to IP.
However, moving from abstract observations to a specific protocol design is never a fast or easy process, and the lessons from the genesis of both IPv4 and IPv6 point to a period of many years of design and progressive refinement to come up with a viable approach. In our current context any such redesign is not a viable alternative to IPv6, given the time frame of IPv4 address exhaustion. It’s unlikely that such an effort would produce a substitute for IPv6; it’s more likely that such an effort would lead towards an eventual successor to IPv6, if we dare to contemplate networking technologies further into the future.
Other approaches exist, based around application-level gateways and similar forms of mapping services from one network domain to another. Like it or not, the pragmatic observation of the present situation is that we don’t have a choice here and that there are no viable substitutes.
What trends do you foresee in IPv6 architecture and deployment? Telecom Engineering Center plans to release IPv6 standards for India sometime soon. What are some key issues related to standards that are likely to crop up?
IPv6 deployment will naturally accelerate now for a smooth transition. Many ISPs are already providing IPv6 services and users may be using IPv6 in many cases without even knowing it.
However, manufacturers need to provide IPv6 support in any and all equipment that can connect to the Internet, and software developers also need to add support in many cases. This is where standards will come into play.
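For application software, the change is often just resolving and connecting in an address-family-agnostic way. A minimal Python sketch (the host name is an arbitrary example) that uses IPv6 when the network offers it and falls back to IPv4 otherwise:

    # Dual-stack client sketch: getaddrinfo() returns IPv6 and IPv4 addresses;
    # use the first one that actually connects.
    import socket

    def connect_any(host, port):
        last_error = None
        for family, socktype, proto, _canonname, sockaddr in socket.getaddrinfo(
                host, port, type=socket.SOCK_STREAM):
            sock = None
            try:
                sock = socket.socket(family, socktype, proto)
                sock.connect(sockaddr)
                return sock
            except OSError as error:
                if sock is not None:
                    sock.close()
                last_error = error
        raise last_error or OSError("no usable addresses for " + host)

    if __name__ == "__main__":
        s = connect_any("www.example.com", 80)   # arbitrary example host
        print("connected over", "IPv6" if s.family == socket.AF_INET6 else "IPv4")
        s.close()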
The Indian government has drawn up a roadmap for moving the country over to IPv6 and it plans to switch all government departments by 2012. What are some of the roadblocks that you see on the way?
All telecom and Internet service providers (ISPs) are required to become IPv6-compliant by December 2011 and offer IPv6 services from then on in India. As part of the roadmap, the government has also decided to form an IPv6 Task Force in Public Private Partnership (PPP) mode for the timely implementation of IPv6 in the country. In addition, government agencies must adopt the new version of the protocol by March 2012.
The transition will pose several challenges: the timing of the process; determining what hardware and software capabilities exist and then planning upgrades; finding and retaining trained staff; and the security and stability issues that may result from new systems and new or undertrained staff. All of these will be crucial.
With India looking to focus on the broadband revolution, what opportunities do you see paving the way ahead with IPv6?
The specific opportunity is to leapfrog new network deployments directly to IPv6, without any intermediate technologies that need to be upgraded or replaced. The more general opportunity is in ubiquitous Internet deployment and ample address space to implement state-of-the-art Internet services, now and in the future.
About 52 million urban Indians were active Internet users in September 2010, according to a report released jointly by the Internet and Mobile Association of India, and research firm IMRB International. Active users are those who have used the Internet at least once a month. A move to IPv6 will give a boost to Internet adoption in the country. A lot of equipment like refrigerators, air conditioners and television sets will come onto the IPv6 network and be controlled remotely, creating a potentially large market in India. IPv6 will address the revolution of M2M applications.
Please tell us about your plans for India. Will any R&D or training centers be set up here in the near future?
APNIC provides services to the entire Asia Pacific region, such as address allocation, resource quality assurance and maintaining registrations. In India, we have always concentrated on supporting local organizations by making our expertise and funding available, whether through training or Internet infrastructure deployment. Back in 2009, we launched our e-learning interactive classes that deliver live online tutorials to Indian members.
Since we handle the monitoring of IP address allocation across the entire region from our headquarters in Brisbane, we don’t need R&D centers in specific locations.
APNIC has also supported NIXI in the deployment of a Test Traffic Measurement (TTM) node in India, which provides some vital data for the South Asian region. A key benefit of the TTM systems is that the data can assist local organizations in developing the cheapest and most effective plans for improvement with the ultimate goal of reducing their reliance on overseas service providers.

Paul Wilson, Director General, Asia Pacific Network Information Center (APNIC), talked to Heena Jhingan about the opportunities created by the transition from IPv4 to IPv6 and the factors that will enable a smooth transition from one to the other.

Readium Open Source Initiative Launched to Accelerate EPUB 3 Adoption


The International Digital Publishing Forum (IDPF) today announced the Readium Project, a new open source initiative to develop a comprehensive reference implementation of the IDPF EPUB® 3 standard. This vision will be achieved by building on WebKit, the widely adopted open source HTML5 rendering engine.
EPUB, an XML and Web Standards based format developed by the IDPF, has become a key global standard in the rapidly developing digital publishing industry, enabling digital books and publications to be portable across devices and reading systems. EPUB 3, a major revision of the standard, was approved in October 2011 and is available at http://idpf.org/epub/30. The new version aligns EPUB with HTML5 and adds support for video, audio, interactivity, vertical writing and other global language capabilities, improved accessibility, MathML, and styling and layout enhancements.
WebKit is an open source rendering engine for HTML5 and related Web Standards. WebKit is utilized as the underlying engine in many web browsers and applications, including Apple Safari, Google Chrome, Apple iBooks, Adobe AIR®, Nokia MeeGo®, HP webOS, and others.
Project Readium is focusing on developing a complete reference implementation of EPUB 3 utilizing the WebKit engine. Packaged as a test application for content developers, the Readium codebase will also serve as a stepping stone for commercial reading systems. A proof-of-concept prototype is available now as a Google Chrome browser extension for Windows and Mac OS X, and the project aims to deliver a feature-complete implementation including an Android® configuration by mid-2012.
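As a point of reference on the container format itself, an EPUB file (version 2 or 3) is a ZIP archive with a fixed "mimetype" entry and a META-INF/container.xml that points to the package document, so its structure can be inspected with a short Python sketch (the file name is hypothetical):

    # Peek inside an EPUB container: print its media type declaration and the
    # container.xml that locates the package document (.opf).
    import zipfile

    def inspect_epub(path):
        with zipfile.ZipFile(path) as epub:
            print("mimetype:", epub.read("mimetype").decode("ascii"))
            print(epub.read("META-INF/container.xml").decode("utf-8"))

    inspect_epub("sample.epub")   # hypothetical file name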
"Project Readium will significantly accelerate EPUB 3 adoption and increase implementation consistency," said Bill McCoy, Executive Director of the IDPF. "A universal digital publishing format for the open web benefits the entire industry and ultimately consumers, who want the freedom to read on their choice of applications and devices."
Project Readium sponsors and other industry stakeholders welcome this IDPF-sponsored activity (see attached quote sheet). For more information about the project, including how to participate and links to downloads and source code, visit http://readium.org.


Source: http://openhealthnews.com/content/readium-open-source-initiative-launched-accelerate-epub-3-adoption

Friday, 24 February 2012

HTTPS: a not-so-expensive secure web?


HTTPS isn't (that) expensive any more

Yes, in the hoary old days of the 1999 web, HTTPS was quite computationally expensive. But thanks to 13 years of Moore's Law, that's no longer the case. It's still more work to set up, yes, but consider the real world case of GMail:
In January this year (2010), Gmail switched to using HTTPS for everything by default. Previously it had been introduced as an option, but now all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that.

HTTPS means The Man can't spy on your Internet

Since all the traffic between you and the websites you log in to would now be encrypted, the ability of nefarious evildoers to either …
  • steal your identity cookie
  • peek at what you're doing
  • see what you've typed
  • interfere with the content you send and receive
… is, if not completely eliminated, drastically limited. Regardless of whether you're on open public WiFi or not.
Personally, I don't care too much if people see what I'm doing online since the whole point of a lot of what I do is to … let people see what I'm doing online. But I certainly don't subscribe to the dangerous idea that "only criminals have things to hide"; everyone deserves the right to personal privacy. And there are lots of repressive governments out there who wouldn't hesitate at the chance to spy on what their citizens do online, or worse. Much, much worse. Why not improve the Internet for all of them at once?

HTTPS goes faster now

Security always comes at a cost, and encrypting a web connection is no different. HTTPS is inevitably going to be slower than a regular HTTP connection. But how much slower? It used to be that encrypted content wouldn't be cached in some browsers, but that's no longer true. And Google's SPDY protocol, intended as a drop-in replacement for HTTP, even goes so far as to bake encryption in by default, and not just for better performance:
[It is a specific technical goal of SPDY to] make SSL the underlying transport protocol, for better security and compatibility with existing network infrastructure. Although SSL does introduce a latency penalty, we believe that the long-term future of the web depends on a secure network connection. In addition, the use of SSL is necessary to ensure that communication across existing proxies is not broken.
There's also SSL False Start which requires a modern browser, but reduces the painful latency inherent in the expensive, but necessary, handshaking required to get encryption going. SSL encryption of HTTP will never be free, exactly, but it's certainly a lot faster than it used to be, and getting faster every year.
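To get a feel for where that latency comes from on a particular connection, a rough client-side measurement can be made with Python's standard ssl module (the host is an arbitrary example; this times an ordinary TLS handshake, independent of SPDY or False Start):

    # Rough timing of the TCP connect versus the TLS handshake.
    import socket
    import ssl
    import time

    def tls_handshake_time(host, port=443):
        context = ssl.create_default_context()
        t0 = time.perf_counter()
        raw = socket.create_connection((host, port))
        t1 = time.perf_counter()
        tls = context.wrap_socket(raw, server_hostname=host)   # handshake happens here
        t2 = time.perf_counter()
        print("%s: TCP connect %.1f ms, TLS handshake %.1f ms, negotiated %s / %s"
              % (host, (t1 - t0) * 1000, (t2 - t1) * 1000, tls.version(), tls.cipher()[0]))
        tls.close()

    tls_handshake_time("www.example.com")   # arbitrary example host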
Bolting on encryption for logged-in users is by no means an easy thing to accomplish, particularly on large, established websites. You won't see me out there berating every public website for not offering encrypted connections yesterday because I know how much work it takes, and how much additional complexity it can add to an already busy team. Even though HTTPS is way easier now than it was even a few years ago, there are still plenty of tough gotchas: proxy caching, for example, becomes vastly harder when the proxies can no longer "see" what the encrypted traffic they are proxying is doing. Most sites these days are a broad mashup of content from different sources, and technically all of them need to be on HTTPS for a properly encrypted connection. Relatively underpowered and weakly connected mobile devices will pay a much steeper penalty, too.
Maybe not tomorrow, maybe not next year, but over the medium to long term, adopting encrypted web connections as a standard for logged-in users is the healthiest direction for the future of the web. We need to work toward making HTTPS easier, faster, and most of all, the default for logged in users.

Sunday, 12 February 2012

IPv6 Quick Facts


IPv4 Address Space:
Slightly over 4,000,000,000 (about 4.3 billion) IP addresses available
IPv6 Address Space:
340,282,366,920,938,463,463,374,607,431,768,211,456 IP addresses available
(340 undecillion, or about 3.4×10^38)
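Both figures follow directly from the address widths (32 bits versus 128 bits); a couple of lines of Python reproduce them:

    # Recompute the two address-space sizes from the address widths.
    ipv4_addresses = 2 ** 32     # 4,294,967,296
    ipv6_addresses = 2 ** 128    # 340,282,366,920,938,463,463,374,607,431,768,211,456

    print("IPv4: {:,}".format(ipv4_addresses))
    print("IPv6: {:,}".format(ipv6_addresses))
    print("IPv6 in scientific notation: {:.1e}".format(ipv6_addresses))   # 3.4e+38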
But IPv6 is more than just a larger address space:
— Security (IPsec) incorporated
— Designed with QoS in mind
— Has awareness of mobility
— Restores the end-to-end Internet communications model