
Calls to end US domination of the internet

Filed under: Web

WHENEVER you surf the web, send emails or download music, an unseen force is at work in the background, making sure you connect to the sites, inboxes and databases you want. The name of this brooding presence? The US government.

Some 35 years after the US military invented the internet, the US Department of Commerce retains overall control of the master computers that direct traffic to and from every web and email address on the planet.

But a group convened by the UN last week to thrash out the future of the net is calling for an end to US domination, proposing instead that a multinational forum of governments, companies and civilian organisations be created to run it.

The UN's Working Group on Internet Governance (WGIG) says US control hinders many developments that might improve the net, from giving the developing world more affordable access to agreeing globally enforceable measures that boost net privacy and fight cybercrime.

US control also means that any changes to the way the net works, including the addition of new domain names such as .mobi for cellphone-accessed sites, have to be agreed by the US, whatever experts in the rest of the world think. The flipside is that the US could make changes without the agreement of the rest of the world.

In a report issued in Geneva, Switzerland, on 14 July, the WGIG seeks to overcome US hegemony. "The internet should be run multilaterally, transparently and democratically. And it must involve all stakeholders," says Markus Kummer, a Swiss diplomat who is executive coordinator of the WGIG.

So why is the internet's overarching technology run by the US? The reason is that the net was developed there in the late 1960s by the Pentagon's Advanced Research Projects Agency (ARPA) in a bid to create a communications medium that would still work if a Soviet nuclear strike took out whole chunks of the network. This medium would send data from node to node in self-addressed "packets" that could take any route they liked around the network, avoiding any damaged parts.
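
To see why that design is resilient, here is a toy sketch (not ARPA's actual protocol) in which a packet finds any surviving route to its destination; the five-node topology is invented for illustration.

```python
# Illustrative only: a toy network showing how independently routed
# packets can bypass damaged nodes. The topology here is invented.
from collections import deque

# Each node lists its direct neighbours (a hypothetical mesh).
network = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}

def find_route(net, src, dst, damaged=frozenset()):
    """Breadth-first search for any intact path from src to dst."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in net[node] - damaged:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # every route is severed

print(find_route(network, "A", "E"))                 # e.g. ['A', 'B', 'D', 'E']
print(find_route(network, "A", "E", damaged={"B"}))  # reroutes via C instead
```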

Today the internet has 13 vast computers dotted around the world that translate text-based email and web addresses into numerical internet protocol (IP) node addresses that computers understand. In effect a massive look-up table, the 13 computers are collectively known as the Domain Name System (DNS). But the DNS master computer, called the master root server, is based in the US and is ultimately controlled by the Department of Commerce. Because the data it contains is propagated to all the other DNS servers around the world, access to the master root server file is a political hot potato.
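
To make that look-up concrete, here is a minimal sketch using Python's standard library; the hostname is only an example, and the query goes to whatever resolver the local system is configured to use, which in turn sits beneath the DNS hierarchy the article describes.

```python
# Minimal sketch: ask the system resolver to translate a text-based
# hostname into the numerical IP addresses computers understand.
import socket

hostname = "www.example.com"  # example name; any public host works

# getaddrinfo walks the DNS on our behalf; the root servers sit at
# the top of the hierarchy it ultimately relies on.
for family, _, _, _, sockaddr in socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr[0])  # e.g. "AF_INET 93.184.216.34"
```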

Currently, only the US can make changes to that master file. And that has some WGIG members very worried indeed.

Full Article.

two words

Fuck that.

If they think the US will hand over control after inventing the 'Net in the first place, they're even dumber than most of the politicians in America.

Let them build their own damn Internet.


More in Tux Machines

Leftovers: OSS

OSS in the Back End

  • Open Source NFV Part Four: Open Source MANO
    Defined in the ETSI ISG NFV architecture, MANO (Management and Network Orchestration) is a layer, built from multiple functional entities, that manages and orchestrates the cloud infrastructure, resources and services. It mainly comprises three entities: the NFV Orchestrator, the VNF Manager and the Virtual Infrastructure Manager (VIM). A toy sketch of this layering appears after this list.
  • After the hype: Where containers make sense for IT organizations
    Container software and its related technologies are on fire, winning the hearts and minds of thousands of developers and catching the attention of hundreds of enterprises, as evidenced by the huge number of attendees at this week’s DockerCon 2016 event. The big tech companies are going all in. Google, IBM, Microsoft and many others were out in full force at DockerCon, scrambling to demonstrate how they’re investing in and supporting containers. Recent surveys indicate that container adoption is surging, with legions of users reporting they’re ready to take the next step and move from testing to production. Such is the popularity of containers that SiliconANGLE founder and theCUBE host John Furrier was prompted to proclaim that, thanks to containers, “DevOps is now mainstream.” That will change the game for those who invest in containers while causing “a world of hurt” for those who have yet to adapt, Furrier said.
  • Is Apstra SDN? Same idea, different angle
    The company’s product, called Apstra Operating System (AOS), takes policies based on the enterprise’s intent and automatically translates them into settings on network devices from multiple vendors. When the IT department wants to add a new component to the data center, AOS is designed to work out which configuration changes that addition requires and to carry them out. The distributed OS is vendor-agnostic: it works with devices from Cisco Systems, Hewlett Packard Enterprise, Juniper Networks, Cumulus Networks, the Open Compute Project and others.
  • MapR Launches New Partner Program for Open Source Data Analytics
    Converged data vendor MapR has launched a new global partner program for resellers and distributors to leverage the company's integrated data storage, processing and analytics platform.
  • A Seamless Monitoring System for Apache Mesos Clusters
  • All Marathons Need a Runner. Introducing Pheidippides
    Activision Publishing, a computer games publisher, uses a Mesos-based platform to manage vast quantities of data collected from players to automate much of the gameplay behavior. To address a critical configuration management problem, James Humphrey and John Dennison built a rather elegant solution that puts all configurations in a single place, and named it Pheidippides.
  • New Tools and Techniques for Managing and Monitoring Mesos
    The platform includes a wide range of tools, among them Logstash, Elasticsearch, InfluxDB, and Kibana; a minimal metrics-push sketch appears after this list.
  • BlueData Can Run Hadoop on AWS, Leave Data on Premises
    We've been watching the Big Data space pick up momentum this year, and Big Data as a Service is one of the most interesting new branches of this trend to follow. In a new development in this space, BlueData, provider of a leading Big-Data-as-a-Service software platform, has announced that the enterprise edition of its BlueData EPIC software will run on Amazon Web Services (AWS) and other public clouds. Essentially, users can now run their cloud and computing applications and services in an Amazon Web Services (AWS) instance while keeping data on-premises, which is required for some companies in the European Union.
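
As promised in the MANO item above, here is a toy sketch of that three-entity layering; the class and method names are invented for illustration and are not an ETSI-defined API.

```python
# Illustrative sketch of the three MANO entities named above; the
# class and method names are invented, not an ETSI-defined API.

class VirtualInfrastructureManager:
    """VIM: controls compute/storage/network resources in one cloud."""
    def allocate(self, cpus, memory_gb):
        print(f"VIM: allocating {cpus} vCPUs, {memory_gb} GB")
        return {"cpus": cpus, "memory_gb": memory_gb}

class VNFManager:
    """VNFM: handles the lifecycle of individual network functions."""
    def __init__(self, vim):
        self.vim = vim
    def instantiate(self, vnf_name):
        resources = self.vim.allocate(cpus=2, memory_gb=4)
        print(f"VNFM: started {vnf_name} on {resources}")
        return vnf_name

class NFVOrchestrator:
    """NFVO: composes VNFs into end-to-end network services."""
    def __init__(self, vnfm):
        self.vnfm = vnfm
    def deploy_service(self, vnf_names):
        return [self.vnfm.instantiate(name) for name in vnf_names]

# Wire the three layers together and deploy a toy service.
orchestrator = NFVOrchestrator(VNFManager(VirtualInfrastructureManager()))
orchestrator.deploy_service(["firewall", "load-balancer"])
```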
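And here is the minimal metrics-push sketch referenced in the Mesos monitoring item; it assumes the `influxdb` Python client package is installed, and the host, database, measurement and tag names are all invented.

```python
# Hypothetical sketch: pushing one cluster metric into InfluxDB, one
# of the tools named above. Names and values here are invented.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="mesos_metrics")
client.create_database("mesos_metrics")  # no-op if it already exists

point = {
    "measurement": "task_cpu_usage",      # invented measurement name
    "tags": {"agent": "mesos-agent-01"},  # invented tag
    "fields": {"cpus_used": 1.5},
}
client.write_points([point])  # a dashboard tool can chart these points later
```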

today's howtos

Industrial SBC builds on Raspberry Pi Compute Module

On Kickstarter, a “MyPi” industrial SBC using the RPi Compute Module offers a mini-PCIe slot, serial port, wide-range power, and modular expansion. You might wonder why in 2016 someone would introduce a sandwich-style single board computer built around the aging, ARM11-based COM version of the original Raspberry Pi, the Raspberry Pi Compute Module. First off, there are still plenty of industrial applications that don’t need much CPU horsepower, and second, the Compute Module is still the only COM based on Raspberry Pi hardware, although the cheaper, somewhat COM-like Raspberry Pi Zero, which has the same 700MHz processor, comes close. Read more
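
For readers wondering what driving that serial port looks like in practice, here is a minimal sketch using the pyserial package; the device path, baud rate and message are assumptions for illustration.

```python
# Minimal sketch of talking to the SBC's serial port with pyserial;
# the device path, baud rate and payload are illustrative assumptions.
import serial

with serial.Serial("/dev/ttyAMA0", baudrate=9600, timeout=1.0) as port:
    port.write(b"PING\n")      # hypothetical request to attached equipment
    reply = port.readline()    # blocks for up to `timeout` seconds
    print(reply.decode(errors="replace"))
```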