The term innovation means a new way of doing something. It may refer to incremental, radical, or revolutionary changes in thinking, products, processes, or organizations. A distinction is typically made between invention, an idea made manifest, and innovation, ideas applied successfully. In many fields, such as the arts, economics, business and government policy, something must be substantially different to be innovative, not merely an insignificant change. In economics the change must increase value: customer value or producer value. The goal of innovation is positive change, to make someone or something better. Innovation leading to increased productivity is the fundamental source of increasing wealth in an economy.
Innovation is an important topic in the study of economics, business, technology, sociology, and engineering. Colloquially, the word "innovation" is often used as synonymous with the output of the process. However, economists tend to focus on the process itself, from the origination of an idea to its transformation into something useful, to its implementation; and on the system within which the process of innovation unfolds. Since innovation is also considered a major driver of the economy, the factors that lead to innovation are also considered to be critical to policy makers.
Those who are directly responsible for application of the innovation are often called pioneers in their field, whether they are individuals or organisations.
Introduction
In the organizational context, innovation may be linked to performance and growth through improvements in efficiency, productivity, quality, competitive positioning, market share, etc. All organizations can innovate, including for example hospitals, universities, and local governments.
While innovation typically adds value, innovation may also have a negative or destructive effect as new developments clear away or change old organizational forms and practices. Organizations that do not innovate effectively may be destroyed by those that do. Hence innovation typically involves risk. A key challenge in innovation is maintaining a balance between process and product innovations: process innovations tend to involve a business model that may improve shareholder satisfaction through improved efficiencies, while product innovations develop customer support, but at the risk of costly R&D that can erode shareholder return.
Conceptualizing innovation
Innovation has been studied in a variety of contexts, including in relation to technology, commerce, social systems, economic development, and policy construction. There are, therefore, naturally a wide range of approaches to conceptualizing innovation in the scholarly literature. See, e.g., Fagerberg et al. (2004).
Fortunately, however, a consistent theme may be identified: innovation is typically understood as the successful introduction of something new and useful, for example introducing new methods, techniques, or practices or new or altered products and services.
Distinguishing from invention and other concepts
"An important distinction is normally made between invention and innovation. Invention is the first occurrence of an idea for a new product or process, while innovation is the first attempt to carry it out into practice" (Fagerberg, 2004: 4)
It is useful, when conceptualizing innovation, to consider whether other words suffice. Invention – the creation of new forms, compositions of matter, or processes – is often confused with innovation. An improvement on an existing form, composition or processes might be an invention, an innovation, both or neither if it is not substantial enough. It can be difficult to differentiate change from innovation. According to business literature, an idea, a change or an improvement is only an innovation when it is put to use and effectively causes a social or commercial reorganization.
Innovation occurs when someone uses an invention or an idea to change how the world works, how people organize themselves, or how they conduct their lives. In this view innovation occurs whether or not the act of innovating succeeds in generating value for its champions. Innovation is distinct from improvement in that it permeates society and can cause reorganization. It is distinct from problem solving and may cause problems. Thus, in this view, innovation occurs whether it has positive or negative results.
Innovation in organizations
A convenient definition of innovation from an organizational perspective is given by Luecke and Katz (2003), who wrote:
"Innovation . . . is generally understood as the successful introduction of a new thing or method . . . Innovation is the embodiment, combination, or synthesis of knowledge in original, relevant, valued new products, processes, or services.
Innovation typically involves creativity, but is not identical to it: innovation involves acting on the creative ideas to make some specific and tangible difference in the domain in which the innovation occurs. For example, Amabile et al (1996) propose:
"All innovation begins with creative ideas . . . We define innovation as the successful implementation of creative ideas within an organization. In this view, creativity by individuals and teams is a starting point for innovation; the first is necessary but not sufficient condition for the second".
For innovation to occur, something more than the generation of a creative idea or insight is required: the insight must be put into action to make a genuine difference, resulting for example in new or altered business processes within the organization, or changes in the products and services provided.
A further characterization of innovation is as an organizational or management process. For example, Davila et al (2006), write:
"Innovation, like many business functions, is a management process that requires specific tools, rules, and discipline."
From this point of view the emphasis is moved from the introduction of specific novel and useful ideas to the general organizational processes and procedures for generating, considering, and acting on such insights leading to significant organizational improvements in terms of improved or new business products, services, or internal processes.
It should be noted, however, that the term 'innovation' is used by many authors rather interchangeably with the term 'creativity' when discussing individual and organizational creative activity. As Davila et al (2006) comment,
"Often, in common parlance, the words creativity and innovation are used interchangeably. They shouldn't be, because while creativity implies coming up with ideas, it's the "bringing ideas to life" . . . that makes innovation the distinct undertaking it is."
The distinctions between creativity and innovation discussed above are by no means fixed or universal in the innovation literature. They are however observed by a considerable number of scholars in innovation studies.
Innovation as a behavior
Some in-depth work on innovation in organisations, teams and individuals has been carried out by Jacqueline L. Byrd, PhD, co-author of "The Innovation Equation" and developer of the Creatrix Inventory, which can be used to examine innovation and what lies behind it. The Innovation Equation she developed is:
Innovation = Creativity * Risk Taking
Using this inventory it is possible to plot where individuals fall on the two axes of risk taking and creativity, as in the illustrative sketch below.
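The relationship can be illustrated with a minimal sketch in Python. The names and scores below are hypothetical and do not reflect the actual scales or scoring of the Creatrix Inventory; the sketch only shows how individuals might be placed on creativity and risk-taking axes, with an innovation score derived as the product of the two.

# Illustrative sketch only: hypothetical 0-10 scores, not the Creatrix
# Inventory's actual scales or scoring method.
people = {
    "Alice": {"creativity": 8, "risk_taking": 3},
    "Bikram": {"creativity": 6, "risk_taking": 7},
    "Carmen": {"creativity": 2, "risk_taking": 9},
}

for name, scores in people.items():
    # Innovation = Creativity * Risk Taking, the equation quoted above
    innovation = scores["creativity"] * scores["risk_taking"]
    print(f"{name}: creativity={scores['creativity']}, "
          f"risk_taking={scores['risk_taking']}, innovation={innovation}")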
Economic conceptions of innovation
Joseph Schumpeter, in The Theory of Economic Development (1934, Harvard University Press, Boston), defined economic innovation as comprising:
1. The introduction of a new good — that is one with which consumers are not yet familiar — or of a new quality of a good.
2. The introduction of a new method of production, which need by no means be founded upon a discovery scientifically new, and can also exist in a new way of handling a commodity commercially.
3. The opening of a new market, that is a market into which the particular branch of manufacture of the country in question has not previously entered, whether or not this market has existed before.
4. The conquest of a new source of supply of raw materials or half-manufactured goods, again irrespective of whether this source already exists or whether it has first to be created.
5. The carrying out of the new organization of any industry, like the creation of a monopoly position (for example through trustification) or the breaking up of a monopoly position.
Schumpeter's focus on innovation is reflected in Neo-Schumpeterian economics, developed by such scholars as Christopher Freeman and Giovanni Dosi.
In the 1980s, Veneris (1984, 1990) developed a system dynamics computer simulation model which takes into account business cycles and innovations.
Innovation is also studied by economists in a variety of contexts, for example in theories of entrepreneurship or in Paul Romer's New Growth Theory.
Transaction cost and network theory perspectives
According to Regis Cabral (1998, 2003):
"Innovation is a new element introduced in the network which changes, even if momentarily, the costs of transactions between at least two actors, elements or nodes, in the network."
Innovation and market outcome
Market outcomes from innovation can be studied through different lenses. The industrial organization approach characterizes markets by the degree of competitive pressure and then models firm behaviour, often using sophisticated game-theoretic tools. While this permits mathematical modelling, it has shifted the ground away from an intuitive understanding of markets. The earlier visual framework in economics, of market demand and supply along price and quantity dimensions, has given way to powerful mathematical models which, though intellectually satisfying, have left policy makers and managers groping for more intuitive and less theoretical analyses to which they can relate at a practical level. Non-quantifiable variables find little place in these models, and when they do, mathematical gymnastics (such as the use of different demand elasticities for differentiated products) embrace many of these qualitative variables, but in an intuitively unsatisfactory way.
In the management (strategy) literature, on the other hand, there is a vast array of relatively simple and intuitive models for both managers and consultants to choose from. Most of these models provide insights which help the manager craft a strategic plan consistent with the desired aims. Indeed, most strategy models are generally simple, wherein lies their virtue. In the process, however, these models often fail to offer insights into situations beyond those for which they were designed, often because the frameworks adopted are seldom analytical or rigorous. The situational analyses of these models tend to be descriptive, are seldom robust, and rarely present the behavioural relationships between the variables under study.
From an academic point of view, there is often a divorce between industrial organisation theory and strategic management models. While many economists view management models as being too simplistic, strategic management consultants perceive academic economists as being too theoretical, and the analytical tools that they devise as too complex for managers to understand.
The innovation literature, while rich in typologies and descriptions of innovation dynamics, is mostly technology focused. Most research on innovation has been devoted to the (technological) process of innovation, or has otherwise taken a how-to-innovate approach. The integrated innovation model of Soumodip Sarkar, presented in his book Innovation, Market Archetypes and Outcome - An Integrated Framework, goes some way to providing the academic, the manager and the consultant with an intuitive understanding of innovation-market linkages in a simple yet rigorous framework.
The integrated model presents a new framework for understanding firm and market dynamics as they relate to innovation. The model is enriched by different strands of literature: industrial organization, management and innovation studies. The integrated approach allows the academic, the management consultant and the manager alike to understand where a product (or a single-product firm) is located in an integrated innovation space and why it is so located, which in turn provides valuable clues as to what to do when designing strategy. The integration of the important determinant variables in one visual framework, with a robust and internally consistent theoretical basis, is an important step towards devising comprehensive firm strategy. The integrated framework provides vital clues towards framing a what-to guide for managers and consultants. Furthermore, the model permits metrics, and consequently diagnostics, of both the firm and the sector, and this set of assessment tools provides a valuable guide for devising strategy.
Sources of innovation
There are several sources of innovation. In the linear model the traditionally recognized source is manufacturer innovation: an agent (person or business) innovates in order to sell the innovation. Another source of innovation, only now becoming widely recognized, is end-user innovation: an agent (person or company) develops an innovation for its own (personal or in-house) use because existing products do not meet its needs. In his classic book on the subject, The Sources of Innovation, Eric von Hippel identifies end-user innovation as by far the most important and critical source.
Innovation by businesses is achieved in many ways, with much attention now given to formal research and development for "breakthrough innovations." But innovations may be developed by less formal on-the-job modifications of practice, through exchange and combination of professional experience and by many other routes. The more radical and revolutionary innovations tend to emerge from R&D, while more incremental innovations may emerge from practice – but there are many exceptions to each of these trends.
Regarding user innovation, user innovators only rarely become entrepreneurs selling their product; more often they trade their innovation in exchange for other innovations. Nowadays, they may also choose to freely reveal their innovations, using methods like open source. In such networks of innovation the creativity of users, or of communities of users, can further develop technologies and their use.
Whether innovation is mainly supply-pushed (based on new technological possibilities) or demand-led (based on social needs and market requirements) has been a hotly debated topic. Similarly, what exactly drives innovation in organizations and economies remains an open question.
More recent theoretical work moves beyond this simple dualistic problem, and through empirical work shows that innovation does not just happen within the industrial supply-side, or as a result of the articulation of user demand, but through a complex set of processes that links many different players together – not only developers and users, but a wide variety of intermediary organisations such as consultancies, standards bodies etc. Work on social networks suggests that much of the most successful innovation occurs at the boundaries of organisations and industries where the problems and needs of users, and the potential of technologies can be linked together in a creative process that challenges both.
Value of experimentation in innovation
When an innovative idea requires a new business model, or radically redesigns the delivery of value to focus on the customer, a real-world experimentation approach increases the chances of market success. New business models and customer experiences cannot be tested through traditional market research methods, and pilot programs for new innovations set the path in stone too early, increasing the costs of failure.
Stefan Thomke of Harvard Business School has written a definitive book on the importance of experimentation. Experimentation Matters argues that every company’s ability to innovate depends on a series of experiments [successful or not], that help create new products and services or improve old ones. That period between the earliest point in the design cycle and the final release should be filled with experimentation, failure, analysis, and yet another round of experimentation. “Lather, rinse, repeat,” Thomke says. Unfortunately, uncertainty often causes the most able innovators to bypass the experimental stage.
In his book, Thomke outlines six principles companies can follow to unlock their innovative potential.
1. Anticipate and Exploit Early Information Through ‘Front-Loaded’ Innovation Processes.
2. Experiment Frequently but Do Not Overload Your Organization.
3. Integrate New and Traditional Technologies to Unlock Performance.
4. Organize for Rapid Experimentation.
5. Fail Early and Often but Avoid ‘Mistakes’.
6. Manage Projects as Experiments.
Thomke further explores what would happen if the principles outlined above were used beyond the confines of the individual organization. For instance, in the state of Rhode Island, innovators are collaboratively leveraging the state's compact geography, economic and demographic diversity and close-knit networks to quickly and cost-effectively test new business models through a real-world experimentation lab.
Diffusion of innovations
Once innovation occurs, innovations may be spread from the innovator to other individuals and groups. This process has been studied extensively in the scholarly literature from a variety of viewpoints, most notably in Everett Rogers' classic book, Diffusion of Innovations. However, this 'linear model' of innovation has been substantially challenged by scholars over the last 20 years, and much research has shown that the simple invention-innovation-diffusion model does not do justice to the multilevel, non-linear processes that firms, entrepreneurs and users participate in to create successful and sustainable innovations.
Rogers proposed that the life cycle of innovations can be described using the ‘s-curve’ or diffusion curve. The s-curve maps growth of revenue or productivity against time. In the early stage of a particular innovation, growth is relatively slow as the new product establishes itself. At some point customers begin to demand the product and its growth increases more rapidly. New incremental innovations or changes to the product allow growth to continue. Towards the end of its life cycle growth slows and may even begin to decline. In the later stages, no amount of new investment in that product will yield a normal rate of return.
The s-curve is derived from half of a normal distribution curve. The assumption is that new products have a product life: a start-up phase, a rapid increase in revenue, and an eventual decline. In fact the great majority of innovations never get off the bottom of the curve, and never produce normal returns.
Innovative companies will typically be working on new innovations that will eventually replace older ones. Successive s-curves will come along to replace older ones and continue to drive growth upwards. Considering two successive curves, the first represents a current technology, while the second represents an emerging technology that currently yields lower growth but will eventually overtake the current technology and lead to even greater levels of growth. The length of life will depend on many factors. An illustrative sketch of such a curve follows.
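As an illustration of the s-curve described above, the following Python sketch traces cumulative adoption against time using the cumulative normal distribution, consistent with the description of the curve as derived from a normal distribution; the midpoint and spread parameters are arbitrary, purely illustrative values. Early periods show slow growth, the middle period shows rapid growth, and growth flattens as the market saturates.

import math

def s_curve(t, midpoint=5.0, spread=1.5):
    # Cumulative normal distribution: a standard s-shaped diffusion curve.
    # Returns the fraction of eventual adoption (0 to 1) reached by time t.
    return 0.5 * (1.0 + math.erf((t - midpoint) / (spread * math.sqrt(2.0))))

# Rough text plot of adoption over time (arbitrary units): slow start,
# rapid middle phase, then flattening as the market saturates.
for t in range(0, 11):
    adoption = s_curve(t)
    bar = "#" * int(adoption * 40)
    print(f"t={t:2d}  adoption={adoption:5.2f}  {bar}")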
Goals of innovation
Programs of organizational innovation are typically tightly linked to organizational goals and objectives, to the business plan, and to market competitive positioning.
For example, one driver for innovation programs in corporations is to achieve growth objectives. As Davila et al (2006) note,
"Companies cannot grow through cost reduction and reengineering alone . . . Innovation is the key element in providing aggressive top-line growth, and for increasing bottom-line results"
In general, business organisations spend a significant proportion of their turnover on innovation, i.e. making changes to their established products, processes and services. The amount of investment can vary from as low as half a percent of turnover for organisations with a low rate of change to over twenty percent of turnover for organisations with a high rate of change.
The average investment across all types of organizations is four percent. For an organisation with a turnover of say one billion currency units, this represents an investment of forty million units. This budget will typically be spread across various functions including marketing, product design, information systems, manufacturing systems and quality assurance.
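As a quick arithmetic check of the figures above, the following Python sketch computes the implied innovation budget for a few investment rates; the turnover and the rates are the illustrative values quoted in the text, not survey data.

# Illustrative turnover of one billion currency units, as in the example above.
turnover = 1_000_000_000

# Investment rates quoted in the text: roughly half a percent for organisations
# with a low rate of change, four percent on average, and twenty percent or
# more for organisations with a high rate of change.
for label, rate in [("low rate of change", 0.005),
                    ("average", 0.04),
                    ("high rate of change", 0.20)]:
    budget = turnover * rate
    print(f"{label:>20}: {rate:5.1%} of turnover = {budget:,.0f} units")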
The investment may vary by industry and by market positioning.
One survey across a large number of manufacturing and services organisations found, ranked in decreasing order of popularity, that systematic programs of organizational innovation are most frequently driven by:
1. Improved quality
2. Creation of new markets
3. Extension of the product range
4. Reduced labour costs
5. Improved production processes
6. Reduced materials
7. Reduced environmental damage
8. Replacement of products/services
9. Reduced energy consumption
10. Conformance to regulations
These goals vary between improvements to products, processes and services and dispel a popular myth that innovation deals mainly with new product development. Most of the goals could apply to any organisation be it a manufacturing facility, marketing firm, hospital or local government.
Failure of innovation
Research findings vary: between fifty and ninety percent of innovation projects are judged to have made little or no contribution to organizational goals. One survey regarding product innovation found that out of three thousand ideas for new products, only one becomes a success in the marketplace.[citation needed] Failure is an inevitable part of the innovation process, and most successful organisations factor in an appropriate level of risk. Perhaps it is because all organisations experience failure that many choose not to monitor the level of failure very closely. The impact of failure goes beyond the simple loss of investment. Failure can also lead to loss of morale among employees, an increase in cynicism and even higher resistance to change in the future.
Innovations that fail are often potentially ‘good’ ideas but have been rejected or ‘shelved’ due to budgetary constraints, lack of skills or poor fit with current goals. Failures should be identified and screened out as early in the process as possible. Early screening avoids unsuitable ideas devouring scarce resources that are needed to progress more beneficial ones. Organizations can learn how to avoid failure when it is openly discussed and debated. The lessons learned from failure often reside longer in the organisational consciousness than lessons learned from success. While learning is important, high failure rates throughout the innovation process are wasteful and a threat to the organisation's future.
The causes of failure have been widely researched and can vary considerably. Some causes will be external to the organisation and outside its influence or control. Others will be internal and ultimately within the control of the organisation. Internal causes of failure can be divided into causes associated with the cultural infrastructure and causes associated with the innovation process itself. Failure in the cultural infrastructure varies between organisations, but the following are common across all organisations at some stage in their life cycle (O'Sullivan, 2002):
1. Poor Leadership
2. Poor Organisation
3. Poor Communication
4. Poor Empowerment
5. Poor Knowledge Management
Common causes of failure within the innovation process in most organisations can be distilled into five types:
1. Poor goal definition
2. Poor alignment of actions to goals
3. Poor participation in teams
4. Poor monitoring of results
5. Poor communication and access to information
Effective goal definition requires that organisations state explicitly what their goals are in terms understandable to everyone involved in the innovation process. This often involves stating goals in a number of ways. Effective alignment of actions to goals should link explicit actions such as ideas and projects to specific goals. It also implies effective management of action portfolios. Participation in teams refers to the behaviour of individuals in and of teams, and each individual should have an explicitly allocated responsibility regarding their role in goals and actions and the payment and rewards systems that link them to goal attainment. Finally, effective monitoring of results requires the monitoring of all goals, actions and teams involved in the innovation process.
Innovation can fail if seen as an organisational process whose success stems from a mechanistic approach, i.e. 'pull lever, obtain result'. While 'driving' change emphasises control, enforcement and structure, this is only a partial truth in achieving innovation. Organisational gatekeepers frame the organisational environment that enables innovation; however, innovation is enacted – recognised, developed, applied and adopted – through individuals.
Individuals are the 'atoms' of the organisation, close to the minutiae of daily activities. Within individuals, a gritty appreciation of small details combines with a sense of desired organisational objectives to deliver (and innovate for) a product or service offer.
From this perspective, innovation succeeds through strategic structures that engage the individual to the organisation's benefit. Innovation pivots on intrinsically motivated individuals, within a supportive culture, informed by a broad sense of the future.
Innovation implies change, and can run counter to an organisation's orthodoxy. Space for a fair hearing of innovative ideas is required to balance the potential autoimmune exclusion that would quell an infant innovative culture.
Measures of innovation
There are two fundamentally different levels at which innovation is measured: the organisational level and the political level. Measurement at the organisational level relates to individuals, teams, and private companies from the smallest to the largest. Measurement of innovation for organisations can be conducted through surveys, workshops, consultants or internal benchmarking. There is today no established general way to measure organisational innovation. Corporate measurements are generally structured around balanced scorecards which cover several aspects of innovation, such as business measures related to finances, innovation-process efficiency, employees' contribution and motivation, as well as benefits for customers. The values measured vary widely between businesses, covering for example new product revenue, spending on R&D, time to market, customer and employee perception and satisfaction, number of patents, and additional sales resulting from past innovations.
At the political level, measures of innovation focus more on a country's or region's competitive advantage through innovation. In this context, organisational capabilities can be evaluated through various evaluation frameworks, e.g. that of the EFQM (European Foundation for Quality Management). The OECD Oslo Manual from 1995 suggests standard guidelines for measuring technological product and process innovation. Some consider the Oslo Manual complementary to the Frascati Manual from 1963. The new Oslo Manual from 2005 takes a wider perspective on innovation and includes marketing and organizational innovation. Innovation has also traditionally been measured through expenditure, for example investment in R&D (research and development) as a percentage of GNP (gross national product). Whether this is a good measurement of innovation has been widely discussed, and the Oslo Manual has incorporated some of the critique of earlier methods of measuring. That said, the traditional methods of measurement still inform many policy decisions. The EU Lisbon Strategy has set as a goal that average expenditure on R&D should be 3% of GNP.
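To make the organisational-level measures more concrete, the following Python sketch computes a few commonly cited indicators (R&D intensity, new-product revenue share, patents filed, time to market) from hypothetical figures; both the figures and the choice of indicators are illustrative assumptions, not a standard scorecard.

# Hypothetical figures for a single business unit (illustrative only).
revenue_total = 500_000_000        # annual revenue, currency units
revenue_new_products = 90_000_000  # revenue from products launched in the last three years
rd_spend = 30_000_000              # annual R&D expenditure
patents_filed = 12
avg_time_to_market_months = 14

metrics = {
    "R&D intensity (R&D / revenue)": rd_spend / revenue_total,
    "New-product revenue share": revenue_new_products / revenue_total,
    "Patents filed": patents_filed,
    "Average time to market (months)": avg_time_to_market_months,
}

for name, value in metrics.items():
    # Ratios are printed as percentages, counts and durations as plain numbers.
    print(f"{name}: {value:.1%}" if isinstance(value, float) else f"{name}: {value}")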
The Oslo Manual is focused on North America, Europe, and other rich economies. In 2001 the Bogota Manual was created for Latin America and the Caribbean countries.
Many scholars claim that there is a great bias towards the "science and technology mode" (S&T-mode or STI-mode), while the "learning by doing, using and interacting mode" (DUI-mode) is widely ignored. For example, an organisation may have the better high technology or software, but the learning tasks that are also crucial for innovation are rarely measured or researched.
Technology transfer is the process of sharing skills, knowledge, technologies, methods of manufacturing, samples of manufacturing and facilities among industries, universities, governments and other institutions, to ensure that scientific and technological developments are accessible to a wider range of users who can then further develop and exploit the technology into new products, processes, applications, materials or services. While conceptually the practice has been utilized for many years (in ancient times, Archimedes was notable for applying science to practical problems), the present-day volume of research, combined with high-profile failures at Xerox PARC and elsewhere, has led to a focus on the process itself.
Transfer process
Many companies, universities and governmental organizations now have an "Office of Technology Transfer" (also known as "Tech Transfer" or "TechXfer") dedicated to identifying research which has potential commercial interest and strategies for how to exploit it. For instance, a research result may be of scientific and commercial interest, but patents are normally only issued for practical processes, and so someone, not necessarily the researchers, must come up with a specific practical process. Another consideration is commercial value; for example, while there are many ways to accomplish nuclear fusion, the ones of commercial value are those that generate more energy than they require to operate.
The process to commercially exploit research varies widely. It can involve licensing agreements or setting up joint ventures and partnerships to share both the risks and rewards of bringing new technologies to market. Other corporate vehicles, e.g. spin-outs, are used where the host organization does not have the necessary will, resources or skills to develop a new technology. Often these approaches are associated with raising of venture capital (VC) as a means of funding the development process, a practice more common in the US than in the EU, which has a more conservative approach to VC funding.
In recent years, there has been a marked increase in technology transfer intermediaries specialized in their field. They work on behalf of research institutions, governments and even large multinationals. Where start-ups and spin-outs are the clients, commercial fees are sometimes waived in lieu of an equity stake in the business. As a result of the potential complexity of the technology transfer process, technology transfer organizations are often multidisciplinary, including economists, engineers, lawyers, marketers and scientists. The dynamics of the technology transfer process has attracted attention in its own right, and there are several dedicated societies and journals.
Technological determinism is a reductionist doctrine that a society's technology determines its cultural values, social structure, or history. Rather than the social shaping of technology, "the uses made of technology are largely determined by the structure of the technology itself, that is, that its functions follow from its form" (Neil Postman). However, this is not to be confused with the inevitability thesis (Chandler), which states that once a technology is introduced into a culture, what follows is the inevitable development of that technology.
Technological determinism has been defined as an approach that identifies technology, or technological advances, as the central causal element in processes of social change (Croteau and Hoynes). As a technology is stabilized, its design tends to dictate users' behaviors, consequently diminishing human agency. It ignores the social and cultural circumstances in which the technology was developed. Sociologist Claude Fischer (1992) characterised the most prominent forms of technological determinism as "billiard ball" approaches, in which technology is seen as an external force introduced into a social situation, producing a series of ricochet effects.
Technological determinism has been summarized as 'The belief in technology as a key governing force in society ...' (Merritt Roe Smith) and 'The idea that technological development determines social change ...' (Bruce Bimber). It changes the way people think and how they interact with others, and can be described as '...a three-word logical proposition: "Technology determines history"' (Rosalind Williams). It is '... the belief that social progress is driven by technological innovation, which in turn follows an "inevitable" course' (Michael L. Smith). This 'idea of progress' or 'doctrine of progress' centres on the idea that social problems can be solved by technological advancement, and that this is the way that society moves forward. Technological determinists believe that "'You can't stop progress', implying that we are unable to control technology" (Lelia Green). This suggests that we are somewhat powerless and that society allows technology to drive social changes because "societies fail to be aware of the alternatives to the values embedded in it [technology]" (Merritt Roe Smith).
The term is believed to have been coined by Thorstein Veblen (1857-1929), an American sociologist. Most interpretations of technological determinism share two general ideas:
* that the development of technology itself follows a predictable, traceable path largely beyond cultural or political influence, and
* that technology in turn has "effects" on societies that are inherent, rather than socially conditioned, or that the society organizes itself in such a way as to support and further develop a technology once it has been introduced.
Technological determinism stands in opposition to the theory of the social construction of technology, which holds that both the path of innovation and the consequences of technology for humans are strongly, if not entirely, shaped by society itself through the influence of culture, politics, economic arrangements, and the like. In this case of social determinism, “What matters is not the technology itself, but the social or economic system in which it is embedded” (Langdon Winner).
Pessimism towards techno-science arose after the mid-20th century for various reasons, including the use of nuclear energy in nuclear weapons, Nazi human experimentation during World War II, and lack of economic development in the third world (also known as the global south). As a direct consequence, the desire for greater control over the course of technological development gave rise to disenchantment with the model of technological determinism in academia and to the creation of the theory of technological constructivism (see social construction of technology).
Consider, for example, why romance novels have become so dominant in our society compared to other forms of the novel, such as the detective or Western novel. One could say that it was because of the invention of the perfect binding system developed by publishers, in which glue was used instead of the time-consuming and very costly process of binding books by sewing in separate signatures. This meant that such books could be mass-produced for the wider public; we would not be able to have mass literacy without mass production. This example is closely related to Marshall McLuhan's belief that print helped produce the nation state. Print moved society on from an oral culture to a literate culture, but also introduced a capitalist society with clear class distinction and individualism. As Postman maintains,
“the printing press, the computer, and television are not therefore simply machines which convey information. They are metaphors through which we conceptualize reality in one way or another. They will classify the world for us, sequence it, frame it, enlarge it, reduce it, argue a case for what it is like. Through these media metaphors, we do not see the world as it is. We see it as our coding systems are. Such is the power of the form of information”.
Hard and soft
In examining determinism we should also touch upon the ideas of hard determinism and soft determinism. A compatibilist says that it is possible for free will and determinism to exist in the world together, while an incompatibilist would say that they cannot and that there must be one or the other. Those who support determinism can be further divided.
Hard determinists would view technology as developing independently of social concerns. They would say that technology creates a set of powerful forces acting to regulate our social activity and its meaning. According to this view of determinism, we organize ourselves to meet the needs of technology, and the outcome of this organization is beyond our control; we do not have the freedom to make a choice regarding the outcome.
Soft determinism, as the name suggests, is a more passive view of the way technology interacts with socio-political situations. Soft determinists still subscribe to the view that technology is the guiding force in our evolution, but would maintain that we have a chance to make decisions regarding the outcomes of a situation. This is not to say that free will exists, but that there is the possibility for us to roll the dice and see what the outcome is. A slightly different variant of soft determinism is the 1922 technology-driven theory of social change proposed by William Fielding Ogburn, in which society must adjust to the consequences of major inventions, but often does so only after a period of cultural lag.
Technology as neutral
Individuals who consider technology neutral see it as neither good nor bad; what matters are the ways in which we use it. An example of a neutral viewpoint is that 'guns are neutral and it's up to how we use them' whether the result is 'good or bad' (Green, 2001). Mackenzie and Wajcman (1997) believe that technology is only neutral if it has never been used before, or if no one knows what it is going to be used for (Green, 2001). In effect, guns would only be classified as neutral if society were none the wiser of their existence and functionalities (Green, 2001). Obviously, such a society is non-existent, and once a society becomes knowledgeable about a technology, the technology is drawn into social progression and nothing whatsoever is 'neutral about society' (Green). According to Lelia Green, if one believes technology is neutral, one disregards the cultural and social conditions in which the technology was produced (Green, 2001). Whether or not technology may be considered neutral thus depends on the individual and the beliefs they hold.
Criticism
Modern thinkers no longer consider technological determinism to be a very accurate view of the way in which we interact with technology. “The relationship between technology and society cannot be reduced to a simplistic cause-and-effect formula. It is, rather, an ‘intertwining’”, whereby technology does not determine but "...operates, and are operated upon in a complex social field" (Murphie and Potts).
In his article "Subversive Rationalization: Technology, Power and Democracy with technology." Andrew Feenberg argues that technological determinism is not a very well founded concept by illustrating that two of the founding theses of determinism are easily questionable and in doing so calls for what he calls democratic rationalization (Feenberg 210-212).
In his article “Do Artifacts Have Politics?,” Langdon Winner transcends hard and soft technological determinism by elaborating two ways in which artifacts can have politics.
Although "The deterministic model of technology is widely propagated in society" (Sarah Miller), it has also been widely questioned by scholars. Lelia Green explains that, "When technology was perceived as being outside society, it made sense to talk about technology as neutral". Yet, this idea fails to take into account that culture is not fixed and society is dynamic. When "Technology is implicated in social processes, there is nothing neutral about society" (Lelia Green). This confirms one of the major problems with "technological determinism and the resulting denial of human responsibility for change. There is a loss of human involvement that shape technology and society" (Sarah Miller).
Another conflicting idea is that of technological somnambulism, a term coined by Winner in his essay "Technologies as Forms of Life". Winner wonders whether we are simply sleepwalking through our existence with little concern or knowledge as to how we truly interact with technology. In this view it is still possible for us to wake up and once again take control of the direction in which we are traveling (Winner 104). However, it requires society to adopt Ralph Schroeder's claim that "users don’t just passively consume technology, but actively transform it”.
In opposition to technological determinism are those who subscribe to social determinism and postmodernism. Social determinists believe that social circumstances determine which technologies are adopted, and that no technology can be considered "inevitable." Technology and culture are not neutral, and when knowledge comes into the equation, technology becomes implicated in social processes. The knowledge of how to create and enhance technology, and of how to use technology, is socially bound knowledge. Postmodernists take another view, suggesting that what is right or wrong is dependent on circumstance. They believe technological change can have implications on the past, present and future. While they believe technological change is influenced by changes in government policy, society and culture, they consider the notion of change to be a paradox, since change is constant.
Technological evolution is the name of a science and technology studies theory describing technology development, developed by Czech philosopher Radovan Richta.
Theory of technological evolution
According to Richta and later Bloomfield, technology (which Richta defines as "a material entity created by the application of mental and physical effort to nature in order to achieve some value") evolves in three stages: tools, machines, and automation. This evolution, he says, follows two trends: the replacement of physical labour with more efficient mental labour, and the resulting greater degree of control over one's natural environment, including an ability to transform raw materials into ever more complex and pliable products.
Stages of technological development
The pretechnological period, in which all other animal species remain today (aside from some avian and primate species), was a non-rational period of early prehistoric man.
The emergence of technology, made possible by the development of the rational faculty, paved the way for the first stage: the tool. A tool provides a mechanical advantage in accomplishing a physical task, and must be powered by human or animal effort.
Hunter-gatherers developed tools mainly for procuring food. Tools such as the container, spear, arrow, plow, or hammer augment physical labour so that objectives can be achieved more efficiently. Later, animal-powered tools such as the plow and the horse increased the productivity of food production about tenfold over the technology of the hunter-gatherers. Tools allow one to do things impossible to accomplish with one's body alone, such as seeing minute visual detail with a microscope, manipulating heavy objects with a pulley and cart, or carrying volumes of water in a bucket.
The second technological stage was the creation of the machine. A machine (a powered machine, to be more precise) is a tool that substitutes for human physical effort, requiring the operator only to control its function. Machines became widespread with the industrial revolution, though windmills, a type of machine, are much older.
Examples of this include cars, trains, computers, and lights. Machines allow humans to tremendously exceed the limitations of their bodies. Putting a machine on the farm, a tractor, increased food productivity at least tenfold over the technology of the plow and the horse.
The third, and final stage of technological evolution is the automaton. The automaton is a machine that removes the element of human control with an automatic algorithm. Examples of machines that exhibit this characteristic are digital watches, automatic telephone switches, pacemakers, and computer programs.
It's important to understand that the three stages outline the introduction of the fundamental types of technology, and so all three continue to be widely used today. A spear, a plow, a pen, and an optical microscope are all examples of tools.
Theoretical implications
The process of technological evolution culminates with the ability to achieve all the material values technologically possible and desirable by mental effort.
An economic implication of the above idea is that intellectual labour will become increasingly important relative to physical labour. Contracts and agreements around information will become increasingly common in the marketplace. The expansion and creation of new kinds of institutions that work with information, such as universities, book stores, and patent-trading companies, is considered an indication that a civilization is undergoing technological evolution.
This highlights the importance of the debate over intellectual property in conjunction with decentralized distribution systems such as today's internet, where the price of distributing information approaches zero as ever more efficient tools for distributing it are invented, and growing amounts of information are distributed to an ever larger customer base. With growing disintermediation in these markets and growing concerns over the protection of intellectual property rights, it is not clear what form markets for information will take as the information age evolves.
The Strategy of Technology doctrine involves a country using its advantage in technology to create and deploy weapons of sufficient power and numbers so as to overawe or beggar its opponents, forcing them to spend their limited resources on developing hi-tech countermeasures and straining their economy.
The Strategy of Technology is described in the eponymous book written by Stefan T. Possony and Jerry Pournelle in 1968. This was required reading in the U.S. service academies during the latter half of the Cold War.
Cold War
The classic example of the successful deployment of this strategy was the nuclear build-up between the U.S. and U.S.S.R. during the Cold War.
Some observers believe that the Vietnam War was a necessary attritive component to this war — Soviet industrial capacity was diverted to conventional arms in North Vietnam, rather than development of new weapons and nuclear weapons — but evidence would need to be found that the then-current administration of the US saw it thus. Current consensus and evidence holds that it was but a failed defensive move in the Cold War, in the context of the Domino Doctrine.
The coup de grâce is considered to have been Ronald Reagan's Strategic Defense Initiative, a clear attempt to render the Soviet nuclear arsenal obsolete, creating an immense expense for the Soviets to maintain parity.
Opposing views and controversies
It is argued that the strategy was not a great success in the Cold War; that the Soviet Union did little to try to keep up with the SDI system, and that the War in Afghanistan caused a far greater drain on Soviet resources. However, the Soviets spent a colossal amount of money on their Buran space shuttle in an attempt to compete with a perceived military threat from the American Space Shuttle program, which was to be used in the SDI.
There is a further consideration. It is not seriously in doubt that despite the excellent education and training of Soviet technologists and scientists, it was the nations of Europe and North America, in particular the United States, which made most of the running in technical development.
The Soviet Union did have some extraordinary technical breakthroughs of its own, for example the 15% efficiency advantage of Soviet rocket engines which used exhaust gases to power the fuel pumps[citation needed], or the Squall supercavitating torpedo. It was also able to use both its superlative espionage arm and the inherent ability of central planning to concentrate resources to great effect.
But the United States found a way to use its opponent's strengths for its own purposes. In the late 1990s, it emerged that many stolen technological secrets had been funnelled by an arm of American intelligence to the Soviet Union. The documents were real, but they described versions of the product that contained a critical but not obvious flaw.
Such was the complexity and depth of the stolen secrets that to check them would have required an effort almost as great as developing a similar product from scratch. Such an effort was possible in nations of the West because the cost could be defrayed by commercial sales. In Soviet states this was not an option. This sort of technological jiu-jitsu may set the pattern of future engagements.
"Superpowers" redirects here. For other uses, see Superpower (disambiguation).
A superpower is a state with a leading position in the international system and the ability to influence events and project power on a worldwide scale to protect its own interests; it is traditionally considered to be one step higher than a great power. Alice Lyman Miller (Professor of National Security Affairs at the Naval Postgraduate School) defines a superpower as "a country that has the capacity to project dominating power and influence anywhere in the world, and sometimes, in more than one region of the globe at a time, and so may plausibly attain the status of global hegemon." The term was first applied in 1944 to the United States, the Soviet Union, and the British Empire. Following World War II, as the British Empire transformed itself into the Commonwealth and its territories became independent, the Soviet Union and the United States generally came to be regarded as the only two superpowers, and they confronted each other in the Cold War.
After the Cold War, the most common belief held that only the United States fulfilled the criteria to be considered a superpower, although it is a matter of debate whether it is a hegemon or whether it is losing its superpower status. China, the European Union, India and Russia are also thought to have the potential to achieve superpower status within the 21st century. Others doubt the existence of superpowers in the post-Cold War era altogether, stating that today's complex global marketplace and the rising interdependency between the world's nations have made the concept of a superpower an idea of the past and that the world is now multipolar.
Application of the term
The term superpower was used to describe nations with greater than great power status as early as 1944, but only gained its specific meaning with regard to the United States and the Soviet Union after World War II.
There have been attempts to apply the term superpower retrospectively, and sometimes very loosely, to a variety of past entities such as Ancient Egypt, Ancient China, Ancient Greece, the Persian Empire, the Roman Empire, the Mongol Empire, Portuguese Empire, the Spanish Empire, the Dutch Republic and the British Empire. Recognition by historians of these older states as superpowers may focus on various superlative traits exhibited by them. For example, at its peak the British Empire was the largest the world had ever seen.
Origin
The term in its current political meaning was coined in the book The Superpowers: The United States, Britain and the Soviet Union – Their Responsibility for Peace (1944), written by William T.R. Fox, an American foreign policy professor. The book spoke of the global reach of a super-empowered nation. Fox used the word superpower to identify a new category of power able to occupy the highest status in a world in which, as the war then raging demonstrated, states could challenge and fight each other on a global scale. According to him, there were at that moment three superpowers: Britain, the United States, and the Soviet Union. The British Empire was the most extensive empire in world history and was considered the foremost great power; by 1921 it held sway over 25% of the world's population and controlled about 25% of the Earth's total land area. The United States and the Soviet Union, meanwhile, grew in power during World War II.
Characteristics
Military assets such as a US Navy Nimitz class aircraft carrier, combined with a blue water navy, are a means of power projection on a global scale, one hallmark of a superpower. Economic power, such as a large nominal GDP and a world reserve currency, is an important factor in the projection of soft power.
The criteria of a superpower are not clearly defined and as a consequence they may differ between sources.
According to Lyman Miller, "The basic components of superpower stature may be measured along four axes of power: military, economic, political, and cultural (or what political scientist Joseph Nye has termed “soft”)."
In the opinion of Kim Richard Nossal of McMaster University, "generally this term was used to signify a political community that occupied a continental-sized landmass, had a sizable population (relative at least to other major powers); a superordinate economic capacity, including ample indigenous supplies of food and natural resources; enjoyed a high degree of non-dependence on international intercourse; and, most importantly, had a well-developed nuclear capacity (eventually normally defined as second-strike capability)."
Former Indian National Security Advisor Jyotindra Nath Dixit has also described the characteristics of superpowers. In his view, "first, the state or the nation concerned should have sizable territorial presence in terms of the size of the population. Secondly, such a state should have high levels of domestic cohesion, clear sense of national identity and stable administration based on strong legal and institutional arrangements. Thirdly, the state concerned should be economically well to do and should be endowed with food security and natural resources, particularly energy resources and infrastructural resources in terms of minerals and metals. Such a state should have a strong industrial base backed by productive capacities and technological knowledge. Then the state concerned should have military capacities, particularly nuclear and missile weapons capabilities at least comparable to, if not of higher levels than other countries which may have similar capacities."
In the opinion of Professor Paul Dukes, "a superpower must be able to conduct a global strategy including the possibility of destroying the world; to command vast economic potential and influence; and to present a universal ideology", although "many modifications may be made to this basic definition".
According to Professor June Teufel Dreyer, "A superpower must be able to project its power, soft and hard, globally."
Cold War
The 1956 Suez Crisis suggested that Britain, financially weakened by two world wars, could not then pursue its foreign policy objectives on an equal footing with the new superpowers without sacrificing convertibility of its reserve currency as a central goal of policy. As the majority of World War II had been fought far from its national boundaries, the United States had not suffered the industrial destruction or massive civilian casualties that marked the wartime situation of the countries in Europe or Asia. The war had reinforced the position of the United States as the world's largest long-term creditor nation and its principal supplier of goods; moreover it had built up a strong industrial and technological infrastructure that had greatly advanced its military strength into a primary position on the global stage.
Despite attempts to create multinational coalitions or legislative bodies (such as the United Nations), it became increasingly clear that the superpowers had very different visions about what the post-war world ought to look like, and after the withdrawal of British aid to Greece in 1947 the United States took the lead in containing Soviet expansion in the Cold War. The two countries opposed each other ideologically, politically, militarily, and economically. The Soviet Union promoted the ideology of communism, whilst the United States promoted the ideologies of liberal democracy and the free market. This was reflected in the Warsaw Pact and NATO military alliances, respectively, as most of Europe became aligned either with the United States or the Soviet Union. These alliances implied that these two nations were part of an emerging bipolar world, in contrast with a previously multipolar world.
The idea that the Cold War period revolved around only two blocs, or even only two nations, has been challenged by some scholars in the post-Cold War era, who have noted that the bipolar world only exists if one ignores all of the various movements and conflicts that occurred without influence from either of the two superpowers. Additionally, much of the conflict between the superpowers was fought in "proxy wars", which more often than not involved issues more complex than the standard Cold War oppositions.
After the Soviet Union disintegrated in the early 1990s, the term hyperpower began to be applied to the United States, as the sole remaining superpower of the Cold War era. This term, coined by French foreign minister Hubert Védrine in the 1990s, is controversial and the validity of classifying the United States in this way is disputed. One notable opponent to this theory, Samuel P. Huntington, rejects this theory in favor of a multipolar balance of power.
Other international relations theorists, such as Henry Kissinger, argue that because the Soviet threat no longer exists for formerly American-dominated regions such as Japan and Western Europe, American influence has been declining since the end of the Cold War, as such regions no longer need protection from the United States or necessarily share its foreign policies.
Post-Cold War (1991–present)
After the dissolution of the Soviet Union in 1991, which ended the Cold War, the post-Cold War world was sometimes considered a unipolar world, with the United States as the world's sole remaining superpower. In the words of Samuel P. Huntington, "The United States, of course, is the sole state with preeminence in every domain of power — economic, military, diplomatic, ideological, technological, and cultural — with the reach and capabilities to promote its interests in virtually every part of the world."
Most experts argue that this older assessment of global politics was too simplified, in part because of the difficulty in classifying the European Union at its current stage of development. Others argue that the notion of a superpower is outdated, considering complex global economic interdependencies, and propose that the world is multipolar. According to Samuel P. Huntington, "There is now only one superpower. But that does not mean that the world is unipolar. A unipolar system would have one superpower, no significant major powers, and many minor powers." Huntington thinks, "Contemporary international politics" ... "is instead a strange hybrid, a uni-multipolar system with one superpower and several major powers."
Additionally, there has been some recent speculation that the United States is declining in relative power as the rest of the world rises to match its levels of economic and technological development. Citing economic hardships, Cold War allies becoming less dependent on the United States, a declining dollar, the rise of other great powers around the world, and declining education, some experts have suggested that the United States could lose its superpower status in the future, or may even be losing it at present.
Potential superpowers
The present-day states most often cited as candidates to become, or to remain, superpowers in the 21st century.
Academics and other qualified commentators sometimes identify potential superpowers thought to have a strong likelihood of being recognized as superpowers in the 21st century. The record of such predictions has not been perfect. For example, in the 1980s some commentators thought Japan would become a superpower, due to its large GDP and high economic growth at the time.
Due to their large populations, growing military strength, economic potential, and influence in international affairs, the People's Republic of China, the European Union, India, and Russia are among the powers most often cited as having the ability to influence future world politics and reach the status of superpower in the 21st century. While some believe one (or more) of these countries will replace the United States as a superpower, others believe they will rise to rival, but not replace, the United States. Still others argue that the historical notion of a "superpower" is increasingly anachronistic in the 21st century, as growing global integration and interdependence make the projection of power by a single superpower impossible.
Visualization of the various routes through a portion of the Internet.
The Internet is a global system of interconnected computer networks that interchange data by packet switching using the standardized Internet Protocol Suite (TCP/IP). It is a "network of networks" that consists of millions of private and public, academic, business, and government networks of local to global scope that are linked by copper wires, fiber-optic cables, wireless connections, and other technologies.
The Internet carries various information resources and services, such as electronic mail, online chat, file transfer and file sharing, online gaming, and the inter-linked hypertext documents and other resources of the World Wide Web (WWW).
Terminology
The terms Internet and World Wide Web are often used in everyday speech without much distinction. However, the Internet and the World Wide Web are not one and the same. The Internet is a global data communications system. It is a hardware and software infrastructure that provides connectivity between computers. In contrast, the Web is one of the services communicated via the Internet. It is a collection of interconnected documents and other resources, linked by hyperlinks and URLs.
Creation
A 1946 comic science-fiction story, A Logic Named Joe, by Murray Leinster anticipated the Internet and many of its strengths and weaknesses. However, it took more than a decade before reality began to catch up with this vision.
The USSR's launch of Sputnik spurred the United States to create the Advanced Research Projects Agency, known as ARPA, in February 1958 to regain a technological lead. ARPA created the Information Processing Technology Office (IPTO) to further the research of the Semi Automatic Ground Environment (SAGE) program, which had networked country-wide radar systems together for the first time. J. C. R. Licklider was selected to head the IPTO, and saw universal networking as a potential unifying human revolution.
Licklider moved from the Psycho-Acoustic Laboratory at Harvard University to MIT in 1950, after becoming interested in information technology. At MIT, he served on a committee that established Lincoln Laboratory and worked on the SAGE project. In 1957 he became a Vice President at BBN, where he bought the first production PDP-1 computer and conducted the first public demonstration of time-sharing.
At the IPTO, Licklider recruited Lawrence Roberts to head a project to implement a network, and Roberts based the technology on the work of Paul Baran, who had written an exhaustive study for the U.S. Air Force that recommended packet switching (as opposed to circuit switching) to make a network highly robust and survivable. After much work, the first two nodes of what would become the ARPANET were interconnected between UCLA and SRI International in Menlo Park, California, on October 29, 1969. The ARPANET was one of the "eve" networks of today's Internet.
Following on from the demonstration that packet switching worked on the ARPANET, the British Post Office, Telenet, DATAPAC and TRANSPAC collaborated to create the first international packet-switched network service, referred to in the UK as the International Packet Switched Service (IPSS), in 1978. The collection of X.25-based networks grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. The X.25 packet switching standard was developed in the CCITT (now called ITU-T) around 1976. X.25 was independent of the TCP/IP protocols that arose from the experimental work of DARPA on the ARPANET, Packet Radio Net and Packet Satellite Net during the same time period.
Vinton Cerf and Robert Kahn developed the first description of the TCP protocols during 1973 and published a paper on the subject in May 1974. Use of the term "Internet" to describe a single global TCP/IP network originated in December 1974 with the publication of RFC 675, the first full specification of TCP, written by Vinton Cerf, Yogen Dalal and Carl Sunshine, then at Stanford University. During the next nine years, work proceeded to refine the protocols and to implement them on a wide range of operating systems.
The first TCP/IP-based wide-area network was operational by January 1, 1983 when all hosts on the ARPANET were switched over from the older NCP protocols. In 1985, the United States' National Science Foundation (NSF) commissioned the construction of the NSFNET, a university 56 kilobit/second network backbone using computers called "fuzzballs" by their inventor, David L. Mills. The following year, NSF sponsored the conversion to a higher-speed 1.5 megabit/second network. A key decision to use the DARPA TCP/IP protocols was made by Dennis Jennings, then in charge of the Supercomputer program at NSF.
The opening of the network to commercial interests began in 1988. The US Federal Networking Council approved the interconnection of the NSFNET to the commercial MCI Mail system in that year and the link was made in the summer of 1989. Other commercial e-mail services were soon connected, including OnTyme, Telemail and Compuserve. In that same year, three commercial Internet service providers (ISPs) were created: UUNET, PSINET and CERFNET. Important, separate networks that offered gateways into, then later merged with, the Internet include Usenet and BITNET. Various other commercial and educational networks, such as Telenet, Tymnet, Compuserve and JANET were interconnected with the growing Internet. Telenet (later called Sprintnet) was a large privately funded national computer network with free dial-up access in cities throughout the U.S. that had been in operation since the 1970s. This network was eventually interconnected with the others in the 1980s as the TCP/IP protocol became increasingly popular. The ability of TCP/IP to work over virtually any pre-existing communication network allowed for great ease of growth, although the rapid growth of the Internet was due primarily to the availability of commercial routers from companies such as Cisco Systems, Proteon and Juniper, the availability of commercial Ethernet equipment for local-area networking and the widespread implementation of TCP/IP on the UNIX operating system.
Growth
Although the basic applications and guidelines that make the Internet possible had existed for almost a decade, the network did not gain a public face until the 1990s. On August 6, 1991, CERN, which straddles the border between France and Switzerland, publicized the new World Wide Web project. The Web was invented by English scientist Tim Berners-Lee in 1989.
An early popular web browser was ViolaWWW, patterned after HyperCard and built using the X Window System. It was eventually replaced in popularity by the Mosaic web browser. In 1993, the National Center for Supercomputing Applications at the University of Illinois released version 1.0 of Mosaic, and by late 1994 there was growing public interest in the previously academic, technical Internet. By 1996 usage of the word Internet had become commonplace, and consequently, so had its use as a synecdoche in reference to the World Wide Web.
Meanwhile, over the course of the decade, the Internet successfully accommodated the majority of previously existing public computer networks (although some networks, such as FidoNet, have remained separate). During the 1990s, it was estimated that the Internet grew by 100% per year, with a brief period of explosive growth in 1996 and 1997. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary open nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network.
University students' appreciation and contributions
New findings in the field of communications during the 1960s, 1970s and 1980s were quickly adopted by universities across North America.
Examples of early university Internet communities are Cleveland FreeNet, Blacksburg Electronic Village and NSTN in Nova Scotia. Students took up the opportunity of free communications and saw this new phenomenon as a tool of liberation. Personal computers and the Internet would free them from corporations and governments (Nelson, Jennings, Stallman).
Graduate students played a huge part in the creation of ARPANET. In the 1960s, the network working group, which did most of the design for ARPANET's protocols, was composed mainly of graduate students.
Today's Internet
The My Opera Community server rack. From the top, user file storage (content of files.myopera.com), "bigma" (the master MySQL database server), and two IBM blade centers containing multi-purpose machines (Apache front ends, Apache back ends, slave MySQL database servers, load balancers, file servers, cache servers and sync masters).
Aside from the complex physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe how to exchange data over the network. Indeed, the Internet is defined by its interconnections and routing policies.
As of June 30, 2008, 1.463 billion people use the Internet according to Internet World Stats.
Internet protocols
The complex communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. While the hardware can often be used to support other software systems, it is the design and the rigorous standardization process of the software architecture that characterizes the Internet.
The responsibility for the architectural design of the Internet software systems has been delegated to the Internet Engineering Task Force (IETF). The IETF conducts standard-setting working groups, open to any individual, on the various aspects of Internet architecture. The resulting discussions and final standards are published in Requests for Comments (RFCs), freely available on the IETF web site.
The principal methods of networking that enable the Internet are contained in a series of RFCs that constitute the Internet Standards. These standards describe a system known as the Internet Protocol Suite. This is a model architecture that divides methods into a layered system of protocols (RFC 1122, RFC 1123). The layers correspond to the environment or scope in which their services operate. At the top is the space (Application Layer) of the software application, e.g., a web browser application, and just below it is the Transport Layer which connects applications on different hosts via the network (e.g., client-server model). The underlying network consists of two layers: the Internet Layer, which enables computers to connect to one another via intermediate (transit) networks and thus is the layer that establishes internetworking and the Internet; and, at the bottom, a software layer that provides connectivity between hosts on the same local link (therefore called the Link Layer), e.g., a local area network (LAN) or a dial-up connection. This model is also known as the TCP/IP model of networking. While other models have been developed, such as the Open Systems Interconnection (OSI) model, they are not compatible in the details of description or implementation.
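To make the layering concrete, the following minimal Python sketch opens a TCP connection (the Transport Layer) and sends a plain HTTP request (the Application Layer); the Internet and Link Layers are supplied by the operating system's network stack. It is only an illustration, not part of any standard, and the host name example.com is used merely as a reachable placeholder.

```python
# Minimal sketch of the layering described above: the application layer
# (an HTTP request) rides on the transport layer (a TCP socket), while the
# operating system supplies the Internet (IP) and link layers underneath.
import socket

HOST = "example.com"  # illustrative host; any reachable web server works

with socket.create_connection((HOST, 80), timeout=10) as sock:  # Transport Layer: TCP
    request = (
        "GET / HTTP/1.0\r\n"       # Application Layer: HTTP
        f"Host: {HOST}\r\n"
        "\r\n"
    )
    sock.sendall(request.encode("ascii"))
    response = b""
    while True:
        chunk = sock.recv(4096)    # bytes delivered by TCP over IP over the link
        if not chunk:
            break
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())  # e.g. "HTTP/1.0 200 OK"
```

Running it prints the first line of the server's reply, showing an application protocol carried end to end by the lower layers without the application ever touching IP or the link directly.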
The most prominent component of the Internet model is the Internet Protocol (IP), which provides addressing systems for computers on the Internet and facilitates the internetworking of networks. IP Version 4 (IPv4) is the initial version used on the first generation of today's Internet and is still in dominant use. It was designed to address up to approximately 4.3 billion (4.3×10⁹) Internet hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion. A new protocol version, IPv6, was developed which provides vastly larger addressing capabilities and more efficient routing of data traffic. IPv6 is currently in the commercial deployment phase around the world.
IPv6 is not interoperable with IPv4. It essentially establishes a "parallel" version of the Internet not accessible with IPv4 software. This means software upgrades are necessary for every networking device that needs to communicate on the IPv6 Internet. Most modern computer operating systems have already been converted to operate with both versions of the Internet Protocol. Network infrastructures, however, are still lagging in this development.
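The scale of the two address spaces, and the fact that the two protocol versions use distinct address families, can be illustrated with the Python standard library's ipaddress module. The specific addresses below are taken from the reserved documentation ranges and are illustrative only.

```python
# Rough illustration of the address-space difference between IPv4 and IPv6,
# using the Python standard library's ipaddress module.
import ipaddress

ipv4_space = 2 ** 32    # ~4.3 billion possible IPv4 addresses
ipv6_space = 2 ** 128   # ~3.4 x 10^38 possible IPv6 addresses
print(f"IPv4 addresses: {ipv4_space:,}")
print(f"IPv6 addresses: {ipv6_space:.3e}")

# The two address families are distinct types; an IPv4 address is not a
# valid IPv6 address, mirroring the lack of direct interoperability.
a4 = ipaddress.ip_address("192.0.2.1")       # documentation-range IPv4 address
a6 = ipaddress.ip_address("2001:db8::1")     # documentation-range IPv6 address
print(type(a4).__name__, type(a6).__name__)  # IPv4Address IPv6Address
```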
Prior to the widespread internetworking that led to the Internet, most communication networks were limited by their nature to only allow communications between the stations on the network, and the prevalent computer networking method was based on the central mainframe method. In the 1960s, computer researchers Levi C. Finch and Robert W. Taylor pioneered calls for a joined-up global network to address interoperability problems. Concurrently, several research programs began to study principles of networking between separate physical networks, and this led to the development of packet switching. These included the research programs of Donald Davies (NPL), Paul Baran (RAND Corporation), and Leonard Kleinrock at MIT and UCLA.
This led to the development of several packet-switched networking solutions in the late 1960s and 1970s, including ARPANET and X.25. Additionally, public access and hobbyist networking systems grew in popularity, including UUCP and FidoNet. They were, however, still disjointed, separate networks, served only by limited gateways. This led to the application of packet switching to develop a protocol for inter-networking, in which multiple different networks could be joined together into a super-framework of networks. By defining a simple common network system, the Internet protocol suite, the concept of the network could be separated from its physical implementation. The resulting idea of a global inter-network, which would be called "the Internet", began to spread quickly as existing networks were converted to become compatible with it. It spread rapidly across the advanced telecommunication networks of the western world, and then began to penetrate the rest of the world as it became the de facto international standard and global network. However, the disparity of growth led to a digital divide that is still a concern today.
Following commercialisation and the introduction of privately run Internet service providers in the 1980s, and the Internet's expansion into popular use in the 1990s, the Internet has had a drastic impact on culture and commerce. This includes the rise of near-instant communication by e-mail, text-based discussion forums, and the World Wide Web. Investor speculation in the new markets provided by these innovations also led to the inflation and collapse of the dot-com bubble, a major market collapse. Despite this, the Internet continues to grow.
Before the Internet
In the 1950s and early 1960s, prior to the widespread inter-networking that led to the Internet, most communication networks were limited by their nature to only allow communications between the stations on the network. Some networks had gateways or bridges between them, but these bridges were often limited or built specifically for a single use. One prevalent computer networking method was based on the central mainframe method, simply allowing its terminals to be connected via long leased lines. This method was used in the 1950s by Project RAND to support researchers such as Herbert Simon, in Pittsburgh, Pennsylvania, when collaborating across the continent with researchers in Sullivan, Illinois, on automated theorem proving and artificial intelligence.
Three terminals and an ARPA
A fundamental pioneer in the call for a global network, J.C.R. Licklider, articulated the ideas in his January 1960 paper, Man-Computer Symbiosis.
"A network of such [computers], connected to one another by wide-band communication lines [which provided] the functions of present-day libraries together with anticipated advances in information storage and retrieval and other symbiotic functions."
—J.C.R. Licklider,
In October 1962, Licklider was appointed head of the United States Department of Defense's Advanced Research Projects Agency, now known as DARPA, within the information processing office. There he formed an informal group within DARPA to further computer research. As part of the information processing office's role, three network terminals had been installed: one for System Development Corporation in Santa Monica, one for Project Genie at the University of California, Berkeley and one for the Compatible Time-Sharing System project at the Massachusetts Institute of Technology (MIT). The need for inter-networking that Licklider had identified would be made obvious by the problems this arrangement caused.
"For each of these three terminals, I had three different sets of user commands. So if I was talking online with someone at S.D.C. and I wanted to talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them. [...] I said, it's obvious what to do (But I don't want to do it): If you have these three terminals, there ought to be one terminal that goes anywhere you want to go where you have interactive computing. That idea is the ARPAnet."
—Robert W. Taylor, co-writer with Licklider of "The Computer as a Communications Device", in an interview with the New York Times,
Packet switching
At the heart of the inter-networking problem lay the issue of connecting separate physical networks to form one logical network, with much wasted capacity inside the assorted separate networks. During the 1960s, Donald Davies (NPL), Paul Baran (RAND Corporation), and Leonard Kleinrock (MIT) developed and implemented packet switching. The notion that the Internet was developed to survive a nuclear attack has its roots in the early theories developed by RAND, but is an urban legend, not supported by any Internet Engineering Task Force or other document. Early networks used for the command and control of nuclear forces were message switched, not packet-switched, although current strategic military networks are, indeed, packet-switched and connectionless. Baran's research had approached packet switching from studies of decentralisation to avoid combat damage compromising the entire network.[3]
Networks that led to the Internet
ARPANET
Len Kleinrock and the first IMP.
Promoted to the head of the information processing office at DARPA, Robert Taylor intended to realize Licklider's ideas of an interconnected networking system. Bringing in Larry Roberts from MIT, he initiated a project to build such a network. The first ARPANET link was established between the University of California, Los Angeles and the Stanford Research Institute at 22:30 hours on October 29, 1969. By 5 December 1969, a 4-node network was connected by adding the University of Utah and the University of California, Santa Barbara. Building on ideas developed in ALOHAnet, the ARPANET grew rapidly. By 1981, the number of hosts had grown to 213, with a new host being added approximately every twenty days.
ARPANET became the technical core of what would become the Internet, and a primary tool in developing the technologies used. ARPANET development was centered around the Request for Comments (RFC) process, still used today for proposing and distributing Internet Protocols and Systems. RFC 1, entitled "Host Software", was written by Steve Crocker from the University of California, Los Angeles, and published on April 7, 1969. These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing.
International collaborations on ARPANET were sparse. For various political reasons, European developers were concerned with developing the X.25 networks. Notable exceptions were the Norwegian Seismic Array (NORSAR) in 1972, followed in 1973 by Sweden with satellite links to the Tanum Earth Station and University College London.
X.25 and public access
Following on from ARPA's research, packet switching network standards were developed by the International Telecommunication Union (ITU) in the form of X.25 and related standards. In 1974, X.25 formed the basis for the SERCnet network between British academic and research sites, which later became JANET. The initial ITU Standard on X.25 was approved in March 1976. This standard was based on the concept of virtual circuits.
The British Post Office, Western Union International and Tymnet collaborated to create the first international packet switched network, referred to as the International Packet Switched Service (IPSS), in 1978. This network grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. By the 1990s it provided a worldwide networking infrastructure.
Unlike ARPAnet, X.25 was also commonly available for business use. Telenet offered its Telemail electronic mail service, but this was oriented to enterprise use rather than the general email of ARPANET.
The first dial-in public networks used asynchronous TTY terminal protocols to reach a concentrator operated by the public network. Some public networks, such as CompuServe, used X.25 to multiplex the terminal sessions into their packet-switched backbones, while others, such as Tymnet, used proprietary protocols. In 1979, CompuServe became the first service to offer electronic mail capabilities and technical support to personal computer users. The company broke new ground again in 1980 as the first to offer real-time chat with its CB Simulator. There were also the America Online (AOL) and Prodigy dial-in networks and many bulletin board system (BBS) networks such as FidoNet. FidoNet in particular was popular amongst hobbyist computer users, many of them hackers and amateur radio operators.
UUCP
In 1979, two students at Duke University, Tom Truscott and Jim Ellis, came up with the idea of using simple Bourne shell scripts to transfer news and messages on a serial line with the nearby University of North Carolina at Chapel Hill. Following public release of the software, the mesh of UUCP hosts forwarding Usenet news rapidly expanded. UUCPnet, as it would later be named, also created gateways and links between FidoNet and dial-up BBS hosts. UUCP networks spread quickly due to the lower costs involved and the ability to use existing leased lines, X.25 links or even ARPANET connections. By 1981 the number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984.
Merging the networks and creating the Internet
TCP/IP
Main article: Internet protocol suite
Map of the TCP/IP test network in January 1982
With so many different network methods, something was needed to unify them. Robert E. Kahn of DARPA and ARPANET recruited Vinton Cerf of Stanford University to work with him on the problem. By 1973, they had worked out a fundamental reformulation, in which the differences between network protocols were hidden by using a common internetwork protocol and, instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible. Cerf credits Hubert Zimmerman, Gerard LeLann and Louis Pouzin (designer of the CYCLADES network) with important work on this design.
At this time, the earliest known use of the term Internet was by Vinton Cerf, who wrote:
“Specification of Internet Transmission Control Program.”
—Request for Comments No. 675, Network Working Group, electronic text, 1974
With the role of the network reduced to the bare minimum, it became possible to join almost any networks together, no matter what their characteristics were, thereby solving Kahn's initial problem. DARPA agreed to fund development of prototype software, and after several years of work, the first somewhat crude demonstration of a gateway between the Packet Radio network in the SF Bay area and the ARPANET was conducted. On November 22, 1977 a three network demonstration was conducted including the ARPANET, the Packet Radio Network and the Atlantic Packet Satellite network—all sponsored by DARPA. Stemming from the first specifications of TCP in 1974, TCP/IP emerged in mid-late 1978 in nearly final form. By 1981, the associated standards were published as RFCs 791, 792 and 793 and adopted for use. DARPA sponsored or encouraged the development of TCP/IP implementations for many operating systems and then scheduled a migration of all hosts on all of its packet networks to TCP/IP. On 1 January 1983, TCP/IP protocols became the only approved protocol on the ARPANET, replacing the earlier NCP protocol.
ARPANET to Several Federal Wide Area Networks: MILNET, NSI, and NSFNet
After the ARPANET had been up and running for several years, ARPA looked for another agency to hand off the network to; ARPA's primary mission was funding cutting edge research and development, not running a communications utility. Eventually, in July 1975, the network had been turned over to the Defense Communications Agency, also part of the Department of Defense. In 1983, the U.S. military portion of the ARPANET was broken off as a separate network, the MILNET. MILNET subsequently became the unclassified but military-only NIPRNET, in parallel with the SECRET-level SIPRNET and JWICS for TOP SECRET and above. NIPRNET does have controlled security gateways to the public Internet.
The networks based around the ARPANET were government funded and therefore restricted to noncommercial uses such as research; unrelated commercial use was strictly forbidden. This initially restricted connections to military sites and universities. During the 1980s, the connections expanded to more educational institutions, and even to a growing number of companies such as Digital Equipment Corporation and Hewlett-Packard, which were participating in research projects or providing services to those who were.
Several other branches of the U.S. government, the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), and the Department of Energy (DOE) became heavily involved in Internet research and started development of a successor to ARPANET. In the mid 1980s, all three of these branches developed the first TCP/IP-based wide area networks. NASA developed the NASA Science Network, NSF developed CSNET and DOE evolved the Energy Sciences Network or ESNet.
More explicitly, NASA developed a TCP/IP based Wide Area Network, NASA Science Network (NSN), in the mid 1980s connecting space scientists to data and information stored anywhere in the world. In 1989, the DECnet-based Space Physics Analysis Network (SPAN) and the TCP/IP-based NASA Science Network (NSN) were brought together at NASA Ames Research Center creating the first multiprotocol wide area network called the NASA Science Internet, or NSI. NSI was established to provide a total integrated communications infrastructure to the NASA scientific community for the advancement of earth, space and life sciences. As a high-speed, multiprotocol, international network, NSI provided connectivity to over 20,000 scientists across all seven continents.
In 1984 NSF developed CSNET exclusively based on TCP/IP. CSNET connected with ARPANET using TCP/IP, and ran TCP/IP over X.25, but it also supported departments without sophisticated network connections, using automated dial-up mail exchange. This grew into the NSFNet backbone, established in 1986, and intended to connect and provide access to a number of supercomputing centers established by the NSF.
Transition towards an Internet
The term "Internet" was adopted in the first RFC published on the TCP protocol (RFC 675: Internet Transmission Control Program, December 1974). It was around the time when ARPANET was interlinked with NSFNet, that the term Internet came into more general use, with "an internet" meaning any network using TCP/IP. "The Internet" came to mean a global and large network using TCP/IP. Previously "internet" and "internetwork" had been used interchangeably, and "internet protocol" had been used to refer to other networking systems such as Xerox Network Services.
As interest in widespread networking grew and new applications for it arose, the Internet's technologies spread throughout the rest of the world. TCP/IP's network-agnostic approach meant that it was easy to use any existing network infrastructure, such as the IPSS X.25 network, to carry Internet traffic. In 1984, University College London replaced its transatlantic satellite links with TCP/IP over IPSS.
Many sites unable to link directly to the Internet started to create simple gateways to allow transfer of e-mail, at that time the most important application. Sites which only had intermittent connections used UUCP or FidoNet and relied on the gateways between these networks and the Internet. Some gateway services went beyond simple e-mail peering, such as allowing access to FTP sites via UUCP or e-mail.
TCP/IP becomes worldwide
The first ARPANET connection outside the US was established to NORSAR in Norway in 1973, just ahead of the connection to Great Britain. These links were all converted to TCP/IP in 1982, at the same time as the rest of the ARPANET.
CERN, the European internet, the link to the Pacific and beyond
Between 1984 and 1988 CERN began installation and operation of TCP/IP to interconnect its major internal computer systems, workstations, PCs and an accelerator control system. CERN continued to operate a limited self-developed system CERNET internally and several incompatible (typically proprietary) network protocols externally. There was considerable resistance in Europe towards more widespread use of TCP/IP and the CERN TCP/IP intranets remained isolated from the Internet until 1989.
In 1988 Daniel Karrenberg, from CWI in Amsterdam, visited Ben Segal, CERN's TCP/IP Coordinator, looking for advice about the transition of the European side of the UUCP Usenet network (much of which ran over X.25 links) over to TCP/IP. In 1987, Ben Segal had met with Len Bosack from the then still small company Cisco about purchasing some TCP/IP routers for CERN, and was able to give Karrenberg advice and forward him on to Cisco for the appropriate hardware. This expanded the European portion of the Internet across the existing UUCP networks, and in 1989 CERN opened its first external TCP/IP connections. This coincided with the creation of Réseaux IP Européens (RIPE), initially a group of IP network administrators who met regularly to carry out co-ordination work together. Later, in 1992, RIPE was formally registered as a cooperative in Amsterdam.
At the same time as the rise of internetworking in Europe, ad hoc networking to ARPA and between Australian universities formed, based on various technologies such as X.25 and UUCPNet. These were limited in their connection to the global networks due to the cost of making individual international UUCP dial-up or X.25 connections. In 1989, Australian universities joined the push towards using IP protocols to unify their networking infrastructures. AARNet was formed in 1989 by the Australian Vice-Chancellors' Committee and provided a dedicated IP-based network for Australia.
The Internet began to penetrate Asia in the late 1980s. Japan, which had built the UUCP-based network JUNET in 1984, connected to NSFNet in 1989. It hosted the annual meeting of the Internet Society, INET'92, in Kobe. Singapore developed TECHNET in 1990, and Thailand gained a global Internet connection between Chulalongkorn University and UUNET in 1992.
Digital divide
While developed countries with technological infrastructures were joining the Internet, developing countries began to experience a digital divide separating them from the Internet. On an essentially continental basis, they are building organizations for Internet resource administration and sharing operational experience, as more and more transmission facilities go into place.
Africa
At the beginning of the 1990s, African countries relied upon X.25 IPSS and 2400 baud modem UUCP links for international and internetwork computer communications. In 1996 a USAID funded project, the Leland initiative, started work on developing full Internet connectivity for the continent. Guinea, Mozambique, Madagascar and Rwanda gained satellite earth stations in 1997, followed by Côte d'Ivoire and Benin in 1998.
Africa is building an Internet infrastructure. AfriNIC, headquartered in Mauritius, manages IP address allocation for the continent. As in the other Internet regions, there is an operational forum, the Internet Community of Operational Networking Specialists.
A wide range of programs exists to provide high-performance transmission plant, and the western and southern coasts have undersea optical cable. High-speed cables join North Africa and the Horn of Africa to intercontinental cable systems. Undersea cable development is slower for East Africa; the original joint effort between the New Partnership for Africa's Development (NEPAD) and the East Africa Submarine System (Eassy) has broken off and may become two efforts.
Asia and Oceania
The Asia Pacific Network Information Centre (APNIC), headquartered in Australia, manages IP address allocation for the region. APNIC sponsors an operational forum, the Asia-Pacific Regional Internet Conference on Operational Technologies (APRICOT).
In 1991, the People's Republic of China saw its first TCP/IP college network, Tsinghua University's TUNET. The PRC went on to make its first global Internet connection in 1995, between the Beijing Electro-Spectrometer Collaboration and Stanford University's Linear Accelerator Center. However, China went on to implement its own digital divide by implementing a country-wide content filter.
Latin America
As with the other regions, the Latin American and Caribbean Internet Addresses Registry (LACNIC) manages the IP address space and other resources for its area. LACNIC, headquartered in Uruguay, operates DNS root, reverse DNS, and other key services.
Opening the network to commerce
The interest in commercial use of the Internet became a hotly debated topic. Although commercial use was forbidden, the exact definition of commercial use could be unclear and subjective. UUCPNet and the X.25 IPSS had no such restrictions, which would eventually see the official barring of UUCPNet use of ARPANET and NSFNet connections. Some UUCP links still remained connecting to these networks, however, as administrators turned a blind eye to their operation.
During the late 1980s, the first Internet service provider (ISP) companies were formed. Companies like PSINet, UUNET, Netcom, and Portal Software were formed to provide service to the regional research networks and provide alternate network access, UUCP-based email and Usenet News to the public. The first dial-up ISP on the West Coast, Best Internet (now Verio), opened in 1986. The first dial-up ISP in the East was world.std.com, opened in 1989.
This caused controversy amongst university users, who were outraged at the idea of noneducational use of their networks. Eventually, it was the commercial Internet service providers who brought prices low enough that junior colleges and other schools could afford to participate in the new arenas of education and research.
By 1990, ARPANET had been overtaken and replaced by newer networking technologies and the project came to a close. In 1994, the NSFNet, now renamed ANSNET (Advanced Networks and Services) and allowing non-profit corporations access, lost its standing as the backbone of the Internet. Both government institutions and competing commercial providers created their own backbones and interconnections. Regional network access points (NAPs) became the primary interconnections between the many networks and the final commercial restrictions ended.
IETF and a standard for standards
The Internet has developed a significant subculture dedicated to the idea that the Internet is not owned or controlled by any one person, company, group, or organization. Nevertheless, some standardization and control is necessary for the system to function.
The liberal Request for Comments (RFC) publication procedure engendered confusion about the Internet standardization process, and led to more formalization of official accepted standards. The IETF started in January 1985 as a quarterly meeting of U.S. government funded researchers. Representatives from non-government vendors were invited starting with the fourth IETF meeting in October of that year.
Acceptance of an RFC by the RFC Editor for publication does not automatically make the RFC into a standard. It may be recognized as such by the IETF only after experimentation, use, and acceptance have proved it to be worthy of that designation. Official standards are numbered with a prefix "STD" and a number, similar to the RFC naming style. However, even after becoming a standard, most are still commonly referred to by their RFC number.
In 1992, the Internet Society, a professional membership society, was formed and the IETF was transferred to operation under it as an independent international standards body.
NIC, InterNIC, IANA and ICANN
The first central authority to coordinate the operation of the network was the Network Information Centre (NIC) at Stanford Research Institute (SRI) in Menlo Park, California. In 1972, management of these issues was given to the newly created Internet Assigned Numbers Authority (IANA). In addition to his role as the RFC Editor, Jon Postel worked as the manager of IANA until his death in 1998.
As the early ARPANET grew, hosts were referred to by names, and a HOSTS.TXT file would be distributed from SRI International to each host on the network. As the network grew, this became cumbersome. A technical solution came in the form of the Domain Name System, created by Paul Mockapetris. The Defense Data Network—Network Information Center (DDN-NIC) at SRI handled all registration services, including the top-level domains (TLDs) of .mil, .gov, .edu, .org, .net, .com and .us, root nameserver administration and Internet number assignments under a United States Department of Defense contract. In 1991, the Defense Information Systems Agency (DISA) awarded the administration and maintenance of DDN-NIC (managed by SRI up until this point) to Government Systems, Inc., who subcontracted it to the small private-sector Network Solutions, Inc.
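The shift from HOSTS.TXT to the Domain Name System can be sketched as the difference between looking a name up in a locally held table and delegating the query to a resolver. The Python sketch below is only an illustration: the table entries and host names are hypothetical, and the DNS lookup simply asks the operating system's resolver.

```python
# Contrast between the early HOSTS.TXT approach (a static, manually
# distributed name table) and a DNS lookup delegated to resolvers.
import socket

# A toy stand-in for the old HOSTS.TXT file: every host had to be listed
# here, and the whole file redistributed whenever anything changed.
HOSTS_TXT = {
    "host-a.example": "192.0.2.10",   # illustrative entries only
    "host-b.example": "192.0.2.11",
}

def resolve_statically(name: str) -> str:
    """Look a name up in the local table, as pre-DNS hosts did."""
    return HOSTS_TXT[name]

def resolve_with_dns(name: str) -> str:
    """Ask the system resolver, which queries the DNS hierarchy."""
    return socket.gethostbyname(name)

print(resolve_statically("host-a.example"))
print(resolve_with_dns("example.com"))  # requires network access
```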
Since at this point in history most of the growth on the Internet was coming from non-military sources, it was decided that the Department of Defense would no longer fund registration services outside of the .mil TLD. In 1993 the U.S. National Science Foundation, after a competitive bidding process in 1992, created the InterNIC to manage the allocations of addresses and management of the address databases, and awarded the contract to three organizations. Registration Services would be provided by Network Solutions; Directory and Database Services would be provided by AT&T; and Information Services would be provided by General Atomics.
In 1998 both IANA and InterNIC were reorganized under the control of ICANN, a California non-profit corporation contracted by the US Department of Commerce to manage a number of Internet-related tasks. The role of operating the DNS system was privatized and opened up to competition, while the central management of name allocations would be awarded on a contract tender basis.
Use and culture
E-mail is often called the killer application of the Internet. However, it actually predates the Internet and was a crucial tool in creating it. E-mail started in 1965 as a way for multiple users of a time-sharing mainframe computer to communicate. Although the history is unclear, among the first systems to have such a facility were SDC's Q32 and MIT's CTSS.
The ARPANET computer network made a large contribution to the evolution of e-mail. There is one report indicating experimental inter-system e-mail transfers on it shortly after ARPANET's creation. In 1971 Ray Tomlinson created what was to become the standard Internet e-mail address format, using the @ sign to separate user names from host names.
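As a small illustration of that convention, the snippet below splits a made-up address into its mailbox and host parts using the Python standard library; the address itself is purely hypothetical.

```python
# Tiny illustration of the user@host convention introduced by Tomlinson:
# everything before the "@" names the mailbox, everything after it names
# the host (today, the mail domain).
from email.utils import parseaddr

_, address = parseaddr("Ray Tomlinson <ray@host-a.example>")  # illustrative address
mailbox, _, host = address.rpartition("@")
print(mailbox)  # ray
print(host)     # host-a.example
```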
A number of protocols were developed to deliver e-mail among groups of time-sharing computers over alternative transmission systems, such as UUCP and IBM's VNET e-mail system. E-mail could be passed this way between a number of networks, including ARPANET, BITNET and NSFNet, as well as to hosts connected directly to other sites via UUCP.
In addition, UUCP allowed the publication of text files that could be read by many others. The News software developed by Steve Daniel and Tom Truscott in 1979 was used to distribute news and bulletin board-like messages. This quickly grew into discussion groups, known as newsgroups, on a wide range of topics. On ARPANET and NSFNet similar discussion groups would form via mailing lists, discussing both technical issues and more culturally focused topics (such as science fiction, discussed on the sflovers mailing list).
From gopher to the WWW
Main articles: History of the World Wide Web and World Wide Web
As the Internet grew through the 1980s and early 1990s, many people realized the increasing need to be able to find and organize files and information. Projects such as Gopher, WAIS, and the FTP Archive list attempted to create ways to organize distributed data. Unfortunately, these projects fell short in being able to accommodate all the existing data types and in being able to grow without bottlenecks.
One of the most promising user interface paradigms during this period was hypertext. The technology had been inspired by Vannevar Bush's "Memex" and developed through Ted Nelson's research on Project Xanadu and Douglas Engelbart's research on NLS. Many small self-contained hypertext systems had been created before, such as Apple Computer's HyperCard. Gopher became the first commonly-used hypertext interface to the Internet. While Gopher menu items were examples of hypertext, they were not commonly perceived in that way.
In 1989, whilst working at CERN, Tim Berners-Lee invented a network-based implementation of the hypertext concept. By releasing his invention to public use, he ensured the technology would become widespread. For his work in developing the world wide web, Berners-Lee received the Millennium technology prize in 2004. One early popular web browser, modeled after HyperCard, was ViolaWWW.
A potential turning point for the World Wide Web began with the introduction of the Mosaic web browser in 1993, a graphical browser developed by a team at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (NCSA-UIUC), led by Marc Andreessen. Funding for Mosaic came from the High-Performance Computing and Communications Initiative, a funding program initiated by then-Senator Al Gore's High Performance Computing and Communication Act of 1991, also known as the Gore Bill. Indeed, Mosaic's graphical interface soon became more popular than Gopher, which at the time was primarily text-based, and the WWW became the preferred interface for accessing the Internet. (Gore's reference to his role in "creating the Internet", however, was ridiculed during his presidential election campaign. See the full article Al Gore and information technology.)
Mosaic was eventually superseded in 1994 by Andreessen's Netscape Navigator, which replaced Mosaic as the world's most popular browser. While it held this title for some time, eventually competition from Internet Explorer and a variety of other browsers almost completely displaced it. Another important event held on January 11, 1994, was The Superhighway Summit at UCLA's Royce Hall. This was the "first public conference bringing together all of the major industry, government and academic leaders in the field [and] also began the national dialogue about the Information Superhighway and its implications."
24 Hours in Cyberspace, the "largest one-day online event" (February 8, 1996) up to that date, took place on the then-active website cyber24.com. It was headed by photographer Rick Smolan. A photographic exhibition was unveiled at the Smithsonian Institution's National Museum of American History on 23 January 1997, featuring 70 photos from the project.
Search engines
Even before the World Wide Web, there were search engines that attempted to organize the Internet. The first of these was the Archie search engine from McGill University in 1990, followed in 1991 by WAIS and Gopher. All three of those systems predated the invention of the World Wide Web but all continued to index the Web and the rest of the Internet for several years after the Web appeared. There are still Gopher servers as of 2006, although there are a great many more web servers.
As the Web grew, search engines and Web directories were created to track pages on the Web and allow people to find things. The first full-text Web search engine was WebCrawler in 1994. Before WebCrawler, only Web page titles were searched. Another early search engine, Lycos, was created in 1993 as a university project, and was the first to achieve commercial success. During the late 1990s, both Web directories and Web search engines were popular—Yahoo! (founded 1995) and Altavista (founded 1995) were the respective industry leaders.
By August 2001, the directory model had begun to give way to search engines, tracking the rise of Google (founded 1998), which had developed new approaches to relevancy ranking. Directory features, while still commonly available, became after-thoughts to search engines.
Database size, which had been a significant marketing feature through the early 2000s, was similarly displaced by emphasis on relevancy ranking, the methods by which search engines attempt to sort the best results first. Relevancy ranking first became a major issue circa 1996, when it became apparent that it was impractical to review full lists of results. Consequently, algorithms for relevancy ranking have continuously improved. Google's PageRank method for ordering the results has received the most press, but all major search engines continually refine their ranking methodologies with a view toward improving the ordering of results. As of 2006, search engine rankings are more important than ever, so much so that an industry has developed ("search engine optimizers", or "SEO") to help web-developers improve their search ranking, and an entire body of case law has developed around matters that affect search engine rankings, such as use of trademarks in metatags. The sale of search rankings by some search engines has also created controversy among librarians and consumer advocates.
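As a rough illustration of link-based relevancy ranking, the sketch below runs a toy power iteration over a hypothetical three-page web. It follows the general idea popularized by PageRank (a page is important if important pages link to it), but it is in no way Google's actual algorithm; the damping factor and iteration count are arbitrary choices for the example.

```python
# Simplified sketch of the idea behind link-based relevancy ranking such as
# PageRank: a page's score depends on the scores of the pages linking to it.
# This is a toy power iteration, not any search engine's production algorithm.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outgoing in links.items():
            if not outgoing:            # dangling page: spread its rank evenly
                for other in pages:
                    new_rank[other] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical three-page web: A and C link to B, B links back to A.
toy_web = {"A": ["B"], "B": ["A"], "C": ["B"]}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```

In this toy web, B ends up ranked highest because two pages link to it, which is the intuition behind ordering results by link structure rather than by raw text matching alone.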
Dot-com bubble
The suddenly low price of reaching millions worldwide, and the possibility of selling to or hearing from those people at the same moment when they were reached, promised to overturn established business dogma in advertising, mail-order sales, customer relationship management, and many more areas. The web was a new killer app—it could bring together unrelated buyers and sellers in seamless and low-cost ways. Visionaries around the world developed new business models, and ran to their nearest venture capitalist. Of course some of the new entrepreneurs were truly talented at business administration, sales, and growth; but the majority were just people with ideas, and didn't manage the capital influx prudently. Additionally, many dot-com business plans were predicated on the assumption that by using the Internet, they would bypass the distribution channels of existing businesses and therefore not have to compete with them; when the established businesses with strong existing brands developed their own Internet presence, these hopes were shattered, and the newcomers were left attempting to break into markets dominated by larger, more established businesses. Many did not have the ability to do so.
The dot-com bubble burst on March 10, 2000, when the technology heavy NASDAQ Composite index peaked at 5048.62 (intra-day peak 5132.52), more than double its value just a year before. By 2001, the bubble's deflation was running full speed. A majority of the dot-coms had ceased trading, after having burnt through their venture capital and IPO capital, often without ever making a profit.
Worldwide Online Population Forecast
In its "Worldwide Online Population Forecast, 2006 to 2011," JupiterResearch anticipates that a 38 percent increase in the number of people with online access will mean that, by 2011, 22 percent of the Earth's population will surf the Internet regularly.
JupiterResearch says the worldwide online population will increase at a compound annual growth rate of 6.6 percent during the next five years, far outpacing the 1.1 percent compound annual growth rate for the planet's population as a whole. The report says 1.1 billion people currently enjoy regular access to the Web.
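As a quick sanity check of those figures (not part of the report itself), compounding 6.6 percent over five years gives roughly the 38 percent overall increase cited, and applying it to 1.1 billion users yields about 1.5 billion, which is close to 22 percent of an assumed 2011 world population of about 6.9 billion. The short sketch below works through that arithmetic; the world population figure is an assumption for illustration only.

```python
# Quick check of the forecast arithmetic: 6.6% compound annual growth over
# five years is roughly the 38% overall increase cited, and the resulting
# online population is close to 22% of an assumed 2011 world population.
base_online = 1.1e9          # online users at the start of the forecast
cagr = 0.066                 # compound annual growth rate
years = 5

growth_factor = (1 + cagr) ** years
online_2011 = base_online * growth_factor
world_pop_2011 = 6.9e9       # assumed figure, not taken from the report

print(f"Overall increase: {growth_factor - 1:.0%}")            # ~38%
print(f"Online in 2011:  {online_2011 / 1e9:.2f} billion")     # ~1.51 billion
print(f"Share of world:  {online_2011 / world_pop_2011:.0%}")  # ~22%
```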
North America will remain on top in terms of the number of people with online access. According to JupiterResearch, online penetration rates on the continent will increase from the current 70 percent of the overall North American population to 76 percent by 2011. However, Internet adoption has "matured," and its adoption pace has slowed, in more developed countries including the United States, Canada, Japan and much of Western Europe, notes the report.
As the online population of the United States and Canada grows by about only 3 percent, explosive adoption rates in China and India will take place, says JupiterResearch. The report says China should reach an online penetration rate of 17 percent by 2011 and India should hit 7 percent during the same time frame. This growth is directly related to infrastructure development and increased consumer purchasing power, notes JupiterResearch.
By 2011, Asians will make up about 42 percent of the world's population with regular Internet access, 5 percent more than today, says the study.
Penetration levels similar to North America's are found in Scandinavia and bigger Western European nations such as the United Kingdom and Germany, but JupiterResearch says that a number of Central European countries "are relative Internet laggards."
Brazil "with its soaring economy," is predicted by JupiterResearch to experience a 9 percent compound annual growth rate, the fastest in Latin America, but China and India are likely to do the most to boost the world's online penetration in the near future.
For the study, JupiterResearch defined "online users" as people who regularly access the Internet by "dedicated Internet access" devices. Those devices do not include cell phones.
Historiography
Some concerns have been raised over the historiography of the Internet's development; specifically, it is hard to find documentation of much of the Internet's development, for several reasons, including the lack of centralized records for many of the early developments that led to the Internet.
"The Arpanet period is somewhat well documented because the corporation in charge - BBN - left a physical record. Moving into the NSFNET era, it became an extraordinarily decentralized process. The record exists in people's basements, in closets. [...] So much of what happened was done verbally and on the basis of individual trust."





