• Internet and the World Wide Web. Purpose: to become acquainted with the Internet and its information system, the World Wide Web (WWW), and with methods of searching it.

    The Internet occupies an ever greater place in our lives. No other human-made technology has gained such widespread popularity. The Internet is a worldwide web covering the entire globe, enveloping it in a mesh of communication links. It began to gain popularity back in the relatively distant 1990s. In this article, we will discuss where it came from and why it became so popular.

    Internet as World Wide Web

    This second name was given for a reason. The Internet unites many users around the world; like a spider's web, it envelops the entire globe with its threads. And this is not just a metaphor: the Internet really is a mesh of wires and wireless links, the latter invisible to us.

    But that is a lyrical digression; in practice the Internet is tied to the World Wide Web (WWW). The Web spans the computers connected to the network: on remote servers, users store the information they need and can also communicate online. This is often what is meant by the World or Global Network.

    It is based on several very important protocols, such as TCP/IP. It is over the Internet that the World Wide Web (WWW) carries out its activity, that is, transmits and receives data.
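As a minimal sketch of what "transmitting and receiving data over TCP/IP" looks like in practice, the toy loopback example below starts a tiny server in a thread and exchanges one HTTP-style message with it. The reply text and the request are invented for the example; a real web server is far more elaborate.

```python
import socket
import threading

# A toy demonstration of TCP/IP as a reliable byte stream:
# a loopback "web server" answers one request from a client.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)

def serve_once():
    conn, _ = srv.accept()
    conn.recv(1024)                                # read the request
    conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello")  # send a reply
    conn.close()

threading.Thread(target=serve_once, daemon=True).start()

cli = socket.create_connection(srv.getsockname())
cli.sendall(b"GET / HTTP/1.0\r\n\r\n")             # an HTTP-style request

reply = b""
while True:                                        # read until the server closes
    chunk = cli.recv(1024)
    if not chunk:
        break
    reply += chunk
cli.close()
```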

    Number of users

    At the end of 2015, a study produced the following figures: the number of Internet users worldwide had reached 3.3 billion people, almost 50% of the total population of our planet.

    Such high figures were achieved thanks to the spread of 3G and high-speed 4G cellular networks. Providers also played an important role: with the mass introduction of Internet technologies, the cost of maintaining servers and manufacturing fiber-optic cable decreased. In most European countries, Internet speed is higher than in African countries, which is explained by the technical lag of the latter and low demand for the service.

    Why is the Internet called the World Wide Web?

    Paradoxical as it may seem, many users are sure that the term above and the Internet are one and the same. This deep misconception, lodged in the minds of many users, is caused by the similarity of the concepts. Let us now figure out what is what.

    The World Wide Web is often confused with the Internet itself. In fact, the Web is a body of information built on top of the technology of the Internet.

    History of the World Wide Web

    By the end of the 1980s, the dominance of NSFNet over ARPANET technology was finally established. Oddly enough, both grew out of the same research community: ARPANET was developed by order of the US Department of Defense - yes, the first users of the Internet were the military - while NSFNet was developed independently of government agencies, almost on pure enthusiasm.

    It was the competition between the two networks that became the basis for their further development and mass adoption around the world. The World Wide Web became available to the general public in 1991. The system had to work somehow, and Tim Berners-Lee took on its development for the Internet. In two years of successful work he created the hypertext protocol HTTP, the famous markup language HTML, and the URL. We need not go into details, since today we see them simply as links and site addresses.

    Information space

    First of all, the Web is an information space accessed via the Internet. It gives the user access to data stored on servers. To use a visual image: the Internet is a three-dimensional cylinder, and the World Wide Web is what fills it.

    Through a program called a browser, the user gains access to the Internet to surf the Web. The Web consists of countless sites hosted on servers, which are connected to computers and are responsible for storing, serving, and displaying data.

    Spider webs and modern man

    Currently, people in developed countries are almost completely integrated with the World Wide Web. We are not talking about our grandparents, or about remote villages where they have not even heard of the Internet.

    Previously, a person in search of information went straight to the library, and if the book he needed was not there, he had to visit other institutions with archives. Now the need for such trips has disappeared.

    In biology, a subspecies name is made up of three words, as in Homo sapiens neanderthalensis. Now a fourth word can safely be added: internetius.

    The Internet is taking over the minds of mankind

    You must agree: we draw almost all our information from the Internet, with tons of it at our fingertips. Tell our ancestors about this, and they would greedily bury themselves in the monitor screen and spend all their free time searching for information.

    It was the Internet that brought humanity to a fundamentally new level; it contributes to the creation of a new, mixed culture. Representatives of different nations imitate and adapt to one another, as if merging their customs into one melting pot. What will the final product be?

    It is especially useful for scientists: there is no longer a need to gather for consultations in a country a thousand kilometers away from yours. You can exchange experience without a personal meeting, for example via messengers or social media, and if an important question needs to be discussed, you can do it via Skype.

    Conclusion

    The World Wide Web is a component of the Internet. Its operation is ensured by storage servers, which provide information to the user on request. The Web itself was developed at CERN thanks to the scientists' enthusiasm.


    Links

    • Official website of the World Wide Web Consortium (W3C)
    • Tim Berners-Lee, Mark Fischetti. Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web. New York: HarperCollins Publishers. 256 p. ISBN 0-06-251587-X, ISBN 978-0-06-251587-2
    Other organizations involved in the development of the World Wide Web and the Internet in general

    The World Wide Web is made up of hundreds of millions of web servers. Most resources on the World Wide Web are based on hypertext technology. Hypertext documents hosted on the World Wide Web are called web pages. Several web pages united by a common theme and design, linked to each other, and usually located on the same web server, are called a website. Special programs - browsers - are used to download and view web pages.

    The World Wide Web has caused a real revolution in information technology and an explosion in the development of the Internet. Often, when talking about the Internet, they mean the World Wide Web, but it is important to understand that this is not the same thing.

    Structure and principles of the World Wide Web

    The World Wide Web is made up of millions of Internet web servers located around the world. A web server is a computer program that runs on a computer connected to a network and uses the HTTP protocol to transfer data. In its simplest form, such a program receives an HTTP request for a specific resource over the network, finds the corresponding file on the local hard drive, and sends it over the network to the requesting computer. More sophisticated web servers are capable of dynamically generating documents in response to an HTTP request using templates and scripts.
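The "simplest form" described above can be sketched with Python's standard library alone. This hypothetical example puts one file in a temporary document root, serves it over HTTP, and fetches it back; the file name and contents are invented for the illustration.

```python
import http.server
import os
import socketserver
import tempfile
import threading
import urllib.request
from functools import partial

# Create a document root with one file for the server to find on disk.
docroot = tempfile.mkdtemp()
with open(os.path.join(docroot, "index.html"), "w") as f:
    f.write("<html><body>Hello, Web!</body></html>")

# SimpleHTTPRequestHandler does exactly what the text describes:
# receive an HTTP request, find the matching file, send it back.
handler = partial(http.server.SimpleHTTPRequestHandler, directory=docroot)
server = socketserver.TCPServer(("127.0.0.1", 0), handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

body = urllib.request.urlopen(f"http://127.0.0.1:{port}/index.html").read().decode()
server.shutdown()
```

Dynamic web servers replace the "find the file" step with code that generates the response, but the request/response cycle is the same.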

    To view information received from a web server, a special program is used on a client computer - a web browser. The main function of a web browser is to display hypertext. The World Wide Web is inextricably linked with the concepts of hypertext and hyperlinks. Much of the information on the Web is hypertext.

    To facilitate the creation, storage and display of hypertext on the World Wide Web, the HTML language (HyperText Markup Language) is traditionally used. The work of creating (marking up) hypertext documents is called layout; it is done by a webmaster or by a separate markup specialist, a layout designer. After HTML markup, the resulting document is saved to a file, and such HTML files are the main type of resource on the World Wide Web. Once an HTML file is made available to a web server, it is referred to as a "web page". A set of web pages forms a website.
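As a small illustration of hypertext, the standard-library html.parser module can pull the hyperlinks out of a marked-up document. The one-line page below is a made-up example; its single link happens to point at the historic first website mentioned later in this article.

```python
from html.parser import HTMLParser

# Minimal hypertext: an HTML page whose hyperlink lets the reader
# jump to another resource, local or on a remote server.
page = '<html><body><a href="http://info.cern.ch/">first site</a></body></html>'

class LinkCollector(HTMLParser):
    """Collect the href attribute of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [value for name, value in attrs if name == "href"]

collector = LinkCollector()
collector.feed(page)
```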

    The hypertext of web pages contains hyperlinks. Hyperlinks help users of the World Wide Web navigate easily between resources (files), regardless of whether the resources are located on the local computer or on a remote server. Uniform Resource Locators (URLs) are used to locate resources on the World Wide Web. For example, the full URL of the main page of the Russian section of Wikipedia looks like this: http://ru.wikipedia.org/wiki/Main_page. URLs combine Uniform Resource Identifier (URI) technology and the Domain Name System (DNS): the domain name (in this case ru.wikipedia.org) in a URL designates the computer (more precisely, one of its network interfaces) that runs the code of the desired web server. The URL of the current page can usually be seen in the browser's address bar, although many modern browsers show only the domain name of the current site by default.
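The anatomy of that example URL can be inspected with Python's urllib.parse, which splits a URL into the scheme (the protocol to use), the domain name (resolved via DNS to a server address), and the path identifying the resource on that server:

```python
from urllib.parse import urlparse

# The example URL from the text, decomposed into its parts.
url = "http://ru.wikipedia.org/wiki/Main_page"
parts = urlparse(url)

# parts.scheme -> the protocol ("http")
# parts.netloc -> the domain name ("ru.wikipedia.org"), looked up via DNS
# parts.path   -> the resource on that server ("/wiki/Main_page")
```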

    World Wide Web Technologies

    To improve the visual presentation of the web, CSS technology has become widely used; it allows uniform design styles to be set for many web pages. Another innovation worth noting is the URN (Uniform Resource Name) system for naming resources.

    A popular concept for the development of the World Wide Web is the Semantic Web. The Semantic Web is an add-on to the existing World Wide Web designed to make information posted on the network more understandable to computers. It is the concept of a network in which every resource in human language would be provided with a description a computer can understand. The Semantic Web provides access to clearly structured information for any application, regardless of platform and programming language. Programs will be able to find the necessary resources themselves, process information, classify data, identify logical relationships, draw conclusions, and even make decisions based on those conclusions. If widely adopted and well implemented, the Semantic Web has the potential to revolutionize the Internet. To create computer-friendly descriptions of resources, the Semantic Web uses the RDF (Resource Description Framework) format, which is based on XML syntax and uses URIs to identify resources. New in this area are RDF Schema (RDFS) and the query language SPARQL (SPARQL Protocol and RDF Query Language, pronounced "sparkle"), which provides fast access to RDF data.
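The idea of machine-readable statements can be sketched without any RDF library: below is a toy in-memory "triple store" with a SPARQL-like wildcard query. The URIs and predicate names are invented for the example, and real RDF tooling is far richer, but the (subject, predicate, object) shape is the heart of the model.

```python
# A toy triple store: every fact is a (subject, predicate, object) statement.
triples = {
    ("http://example.org/TimBL", "invented", "http://example.org/WWW"),
    ("http://example.org/WWW", "runsOver", "http://example.org/Internet"),
}

def query(s=None, p=None, o=None):
    """Pattern-match triples, SPARQL-style: None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# A program can now "ask" the data questions, e.g. who invented what:
inventions = query(p="invented")
```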

    History of the World Wide Web

    Tim Berners-Lee and, to a lesser extent, Robert Cailliau are considered the inventors of the World Wide Web. Tim Berners-Lee is the author of the HTTP, URI/URL and HTML technologies. In 1980 he worked at the European Council for Nuclear Research (fr. Conseil Européen pour la Recherche Nucléaire, CERN) as a software consultant. It was there, in Geneva, Switzerland, that he wrote the Enquire program ("Inquirer", in loose translation) for his own needs, which used random associations to store data and laid the conceptual foundation for the World Wide Web.

    In 1989, while working at CERN on the organization's intranet, Tim Berners-Lee proposed the global hypertext project now known as the World Wide Web. The project envisaged the publication of hypertext documents interconnected by hyperlinks, which would facilitate the search for and consolidation of information by CERN scientists. To implement the project, Tim Berners-Lee (together with his assistants) invented URIs, the HTTP protocol, and the HTML language - technologies without which it is impossible to imagine the modern Internet. Between 1991 and 1993, Berners-Lee refined the technical specifications of these standards and published them. Nevertheless, 1989 is considered the official birth year of the World Wide Web.

    As part of the project, Berners-Lee wrote the world's first web server, httpd, and the world's first hypertext web browser, called WorldWideWeb. This browser was also a WYSIWYG editor (What You See Is What You Get); its development started in October 1990 and was completed in December of the same year. The program ran in the NeXTStep environment and began to spread across the Internet in the summer of 1991.

    Mike Sendall bought a NeXT cube computer at that time in order to understand the features of its architecture, and then gave it to Tim [Berners-Lee]. Thanks to the sophistication of the NeXT cube's software system, Tim wrote a prototype illustrating the main provisions of the project in a few months. It was an impressive result: the prototype offered users, among other things, advanced features such as WYSIWYG browsing/authoring!... The only thing I insisted on was that the name should not once again be drawn from Greek mythology. Tim suggested the "World Wide Web". I immediately liked everything about this name; only, it is difficult to pronounce in French.

    The world's first website was put online by Berners-Lee on August 6, 1991 on the first web server, available at http://info.cern.ch/. The site defined the concept of the World Wide Web and contained instructions for setting up a web server, using a browser, and so on. It was also the world's first Internet directory, because Tim Berners-Lee later hosted and maintained a list of links to other sites there.

    The first photograph on the World Wide Web showed the parody filk band Les Horribles Cernettes. Tim Berners-Lee asked the band's leader for scanned photos after the CERN Hardronic Festival.

    Yet the theoretical foundations of the Web were laid much earlier than Berners-Lee's work. Back in 1945, Vannevar Bush developed the concept of the Memex, an auxiliary mechanical means of "expanding human memory". The Memex is a device in which a person would store all his books and records (and, ideally, all of his knowledge that can be formally described) and which would retrieve the needed information with sufficient speed and flexibility - an extension of and supplement to human memory. Bush also predicted comprehensive indexing of text and multimedia resources with the ability to quickly find the necessary information. The next significant step towards the World Wide Web was the creation of hypertext (a term coined by Ted Nelson in 1965).

    Since 1994, the main work on developing the World Wide Web has been taken over by the World Wide Web Consortium (W3C), founded and still led by Tim Berners-Lee. This consortium is an organization that develops and implements technological standards for the Internet and the World Wide Web. The W3C's mission is to "unleash the full potential of the World Wide Web by creating protocols and principles that guarantee the long-term development of the Web." The consortium's two other major goals are complete "internationalization of the Web" and making the Web accessible to people with disabilities.

    The W3C develops uniform principles and standards for the Internet (called W3C Recommendations), which are then implemented by software and hardware manufacturers. In this way, compatibility is achieved between the software products and equipment of different companies, which makes the World Wide Web more capable, versatile and convenient. All W3C Recommendations are open: they are not covered by patents and can be implemented by anyone without any financial contribution to the consortium.

    Prospects for the development of the World Wide Web

    Currently, there are two trends in the development of the World Wide Web: the semantic web and the social web.

    • The Semantic Web involves improving the connectivity and relevance of information on the World Wide Web through the introduction of new metadata formats.
    • The Social Web relies on the work of organizing the information available on the Web, performed by the users of the Web themselves. Within the second direction, developments that are part of the Semantic Web are actively used as tools (RSS and other web feed formats, OPML, XHTML microformats). Partially semantized sections of the Wikipedia Category Tree help users consciously navigate in the information space, however, very mild requirements for subcategories do not give reason to hope for an expansion of such sections. In this regard, attempts to compile Knowledge atlases may be of interest.

    There is also the popular concept of Web 2.0, which generalizes several directions of development of the World Wide Web at once.

    Ways to actively display information on the World Wide Web

    Information on the web can be displayed either passively (that is, the user can only read it) or actively - then the user can add information and edit it. Ways to actively display information on the World Wide Web include:

    It should be noted that this division is very conditional. So, say, a blog or a guest book can be considered as a special case of a forum, which, in turn, is a special case of a content management system. Usually the difference is manifested in the purpose, approach and positioning of a particular product.

    Some of the information from websites can also be accessed through speech. India has already begun testing a system that makes the text content of pages accessible even to people who cannot read and write.

    The World Wide Web is sometimes ironically called the Wild Wild Web, by analogy with the title of the movie Wild Wild West.

    Safety

    For cybercriminals, the World Wide Web has become a key way of distributing malicious software. In addition, the concept of network crime covers identity theft, fraud, espionage, and the illegal collection of information about particular subjects or objects. Web vulnerabilities, according to some reports, now outnumber traditional manifestations of computer security problems; Google estimates that approximately one in ten pages on the World Wide Web may contain malicious code. According to Sophos, a British maker of antivirus solutions, most cyber attacks in the web space are carried out through legitimate websites, located mainly in the USA, China and Russia. The most common type of such attack, according to the same company, is SQL injection - the malicious insertion of direct database queries into text fields on a resource's pages, which, if security is insufficient, can lead to disclosure of the database's contents. Another common threat, exploiting the capabilities of HTML and URIs, is cross-site scripting (XSS), made possible by the introduction of JavaScript and gaining momentum with the development of Web 2.0 and Ajax - new standards that encouraged the use of interactive scripts. It was estimated in 2008 that up to 70% of all websites in the world were vulnerable to XSS attacks against their users.
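The SQL-injection mechanism described above can be shown in a minimal, self-contained way with an in-memory SQLite database (the table and data are invented): string concatenation lets a crafted input rewrite the query, while a parameterized query treats the same input as plain data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # a classic injection payload

# UNSAFE: the payload becomes part of the SQL text, the WHERE clause
# matches every row, and the table's contents leak.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'"
).fetchall()

# SAFE: a parameterized query treats the payload as an ordinary string,
# which matches no user name, so nothing is disclosed.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()
```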

    The proposed solutions to these problems vary widely, even to the point of contradicting one another. Large security vendors such as McAfee develop products that evaluate information systems for compliance with certain requirements; other market players (for example, Finjan) recommend actively inspecting program code and, in general, all content in real time, regardless of the data source. There are also opinions that enterprises should see security as an opportunity for business development rather than a source of expense; to achieve this, the hundreds of information security companies of today would have to be replaced by a small group of organizations enforcing an infrastructural policy of constant and pervasive digital rights management.

    Confidentiality

    Each time a user's computer requests a web page from a server, the server determines and usually logs the IP address the request came from. Likewise, most web browsers record the pages visited, which can then be viewed in the browser history, and cache downloaded content for possible reuse. If an encrypted HTTPS connection is not used when interacting with the server, requests and the responses to them are transmitted over the Internet in clear text and can be read, recorded, and viewed at intermediate network nodes.

    When a web page requests, and the user provides, a certain amount of personal information - such as a first and last name, or a physical or email address - the data stream can be de-anonymized and associated with a specific person. If the website uses cookies, or supports user authentication or other visitor-tracking technologies, a relationship can also be established between previous and subsequent visits. Thus, an organization operating on the World Wide Web can create and update a profile of a specific client using its site (or sites). Such a profile may include, for example, information about leisure and entertainment preferences, consumer interests, occupation, and other demographics. Such profiles are of significant interest to marketers, advertising agencies, and other professionals of that kind. Depending on the terms of service of specific services and on local laws, such profiles may be sold or transferred to third parties without the user's knowledge.
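The cookie mechanism that links previous and subsequent visits can be sketched with the standard-library http.cookies module; the cookie name and value here are invented for the example.

```python
from http.cookies import SimpleCookie

# Server side: issue an identifier that ties this visit to later ones.
issued = SimpleCookie()
issued["visitor_id"] = "abc123"
header = issued.output(header="Set-Cookie:")  # header sent with the response

# Client side: the browser stores the cookie and sends it back on the
# next request, letting the site connect the two visits.
returned = SimpleCookie()
returned.load("visitor_id=abc123")
```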

    Social networks also contribute to the disclosure of information by inviting participants to share a certain amount of personal data about themselves. Careless handling of such resources can make information public that the user would prefer to hide; among other things, such information may attract the attention of bullies or, worse, cybercriminals. Modern social networks give their members a fairly wide range of profile privacy settings, but these settings can be overly complicated, especially for inexperienced users.

    Spreading

    Between 2005 and 2010, the number of web users doubled, passing the two billion mark. According to early studies in 1998 and 1999, most existing websites were not correctly indexed by search engines, and the web itself was larger than expected. As of 2001, more than 550 million web documents existed, most of them in the invisible web. As of 2002, more than 2 billion web pages had been created; 56.4% of all Internet content was in English, followed by German (7.7%), French (5.6%) and Japanese (4.9%). According to research conducted at the end of January 2005, over 11.5 billion web pages in 75 different languages were identified and indexed on the open web. As of March 2009, the number of pages had increased to 25.21 billion. On July 25, 2008, Google software engineers Jesse Alpert and Nissan Hajaj announced that Google Search had discovered over one trillion unique URLs.

    • In 2011, a monument to the World Wide Web was planned in St. Petersburg. The composition was to be a street bench in the shape of the abbreviation WWW, with free access to the Web.

    See also

    • Global computer network
    • World Digital Library
    • Global use of the Internet


    History of creation and development of the Internet.

    The Internet owes its birth to the US Department of Defense and to research it conducted in 1969 to test methods that would let computer networks survive military action through dynamic message rerouting. The first such network was ARPAnet, which connected three networks in California with a network in Utah under a set of rules called the Internet Protocol (IP).

    In 1972, access was opened to universities and research organizations, and the network grew to connect 50 universities and research organizations that had contracts with the US Department of Defense.

    In 1973, the network grew to an international scale, connecting networks located in England and Norway. A decade later, IP was extended with a set of communication protocols supporting both local and wide area networks; this is how TCP/IP was born. Shortly thereafter, the National Science Foundation (NSF) launched NSFnet to link five supercomputing centers. Built on the TCP/IP protocol, the new network soon replaced ARPAnet as the backbone of the Internet.

    The impetus for the Internet's popularity and growth, and for its transformation into a business environment, came from the emergence of the World Wide Web (WWW): a hypertext system that made navigating the Internet fast and intuitive.

    The idea of linking documents through hypertext was first proposed and promoted by Ted Nelson in the 1960s, but the computer technology of the time did not allow it to be realized; although who knows how things would have turned out had the idea come to fruition then?

    The foundations of what we understand today as the WWW were laid in the 1980s by Tim Berners-Lee while creating a hypertext system at CERN, the European Laboratory for Particle Physics.

    As a result of this work, the first text-mode browser, which allowed viewing text files linked by hyperlinks online, was presented to the scientific community in 1990. The browser was made available to the general public in 1991, but it spread slowly outside academia.

    A new historical stage in the development of the Internet began with the release in 1993 of the first Unix version of the Mosaic graphical browser, developed in 1992 by Marc Andreessen, a student interning at the National Center for Supercomputing Applications (NCSA) in the USA.

    From 1994, after versions of Mosaic for the Windows and Macintosh operating systems appeared, followed shortly by the Netscape Navigator and Microsoft Internet Explorer browsers, came the explosion in popularity of the WWW, and with it the Internet, among the general public, first in the United States and then around the world.

    In 1995, the NSF handed responsibility for the Internet over to the private sector, and since then the Internet has existed as we know it today.


    Internet services.

    Services are the kinds of service provided by Internet servers.
    In the history of the Internet there have been different types of services: some are no longer used, others are gradually losing popularity, while still others are flourishing.
    Let us list the services that remain relevant at the moment:
    -World Wide Web - a service for searching and viewing hypertext documents, including graphics, sound and video.
    -E-mail - a service for transmitting electronic messages.
    -Usenet, News - teleconferences, newsgroups - a kind of online newspaper or bulletin board.
    -FTP is a file transfer service.
    -ICQ is a service for real-time communication using the keyboard.
    -Telnet is a service for remote access to computers.
    -Gopher - information access service using hierarchical directories.

    Among these services we can single out those designed for communication, that is, for interaction and the transfer of messages (E-mail, ICQ), and those whose purpose is to store information and provide users with access to it.

    Among the latter, the leading place in the amount of stored information belongs to the WWW service, since it is the most convenient for users and the most technically advanced. Second place belongs to the FTP service, because whatever interfaces and conveniences are developed for the user, information is still stored in files, and it is this service that provides access to them. The Gopher and Telnet services can now be considered dying: their servers receive almost no new information, and the number of such servers and their audience is barely growing.

    The World Wide Web

    The World Wide Web (WWW) is a hypertext, or rather, hypermedia information system for searching for Internet resources and accessing them.

    Hypertext is an information structure that allows you to establish semantic links between text elements on a computer screen in such a way that you can easily navigate from one element to another.
    In practice, some words in hypertext are highlighted by underlining or by a different color. Highlighting indicates that the word is linked to some document in which the topic associated with it is considered in more detail.

    Hypermedia is what happens if in the definition of hypertext we replace the word "text" with "any kind of information": sound, graphics, video.
    Such hypermedia links are possible because along with text you can link any other binary information, for example encoded sound or graphics. So if a program displays a map of the world and the user selects a continent on that map with the mouse, the program can immediately present graphic, sound and text information about it.

    The WWW system is built on a special data transfer protocol called the HyperText Transfer Protocol (HTTP).
    All contents of the WWW system consist of WWW pages.
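    The HTTP exchange behind this can be illustrated with a short sketch in Python. It builds the raw bytes of the request a browser sends when it fetches a page; the host name example.com is a placeholder for illustration.

```python
def build_get_request(host: str, path: str = "/") -> bytes:
    """Construct the raw bytes of an HTTP/1.1 GET request."""
    lines = [
        f"GET {path} HTTP/1.1",  # request line: method, resource, protocol version
        f"Host: {host}",         # mandatory in HTTP/1.1: one server may host many sites
        "Connection: close",     # ask the server to close the connection afterwards
    ]
    # the header section ends with an empty line (CRLF CRLF)
    return ("\r\n".join(lines) + "\r\n\r\n").encode("ascii")

request = build_get_request("example.com")
print(request.decode("ascii"))
```

    Sending these bytes to port 80 of the server over a TCP socket, and reading back the reply, is essentially all a browser does to fetch a WWW page.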

    WWW pages are the hypermedia documents of the World Wide Web. They are created using HTML (HyperText Markup Language). In practice, one WWW "page" is usually a set of hypermedia documents located on the same server, interconnected by links and related in meaning (for example, containing information about one educational institution or one museum). Each document of the page can in turn contain several screens' worth of text and illustrations. Each WWW page has its own "home page": a hypermedia document containing links to the page's main components. The addresses of home pages are distributed on the Internet as the addresses of the pages themselves.

    A set of Web pages linked together by links and designed to achieve a common goal is called a Web site.

    Email.

    E-mail appeared about 30 years ago. Today it is the most widely used means of information exchange on the Internet. The ability to receive and send e-mail is useful not only for communicating with friends in other cities and countries but also in a business career. For example, when applying for a job you can quickly send your resume by e-mail to various firms. In addition, many sites that require registration (online games, online stores, and so on) ask you to specify an e-mail address. In a word, e-mail is a very useful and convenient thing.

    E-mail (electronic mail, abbreviated e-mail) is used to send text messages within the Internet, as well as between the Internet and other e-mail networks (Figure 1).

    Using e-mail, you can send messages, receive them in your electronic mailbox, reply to correspondents' letters, send copies of a letter to several recipients at once, forward a received letter to another address, use logical names instead of addresses, create several subsections of the mailbox for different kinds of correspondence, and include sound and graphics files in letters, as well as binary files - programs.
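    Most of these operations can be seen in miniature with Python's standard email library. A minimal sketch, in which all the addresses and the attached bytes are made-up placeholders:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.org"
msg["Cc"] = "carol@example.org, dave@example.org"  # copies to several recipients at once
msg["Subject"] = "Report"
msg.set_content("The report is attached.")
# attach a small binary file (three raw bytes standing in for a program or image)
msg.add_attachment(b"\x00\x01\x02", maintype="application",
                   subtype="octet-stream", filename="data.bin")

# actually sending the letter would go through an SMTP server, e.g. (hypothetical host):
#   import smtplib
#   with smtplib.SMTP("smtp.example.org") as s:
#       s.send_message(msg)
```

    The commented-out lines show where a real mail server would come in; everything above them runs locally.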

    To use E-mail, the computer must be connected to the telephone network via a modem.
    A computer connected to the network is considered a potential sender and receiver of packets. When sending a message to another node, each Internet node breaks it into packets of a fixed length, usually 1500 bytes. Each packet is supplied with the destination address and the sender's address. The packets prepared in this way are sent over communication channels to other nodes. On receiving a packet, a node analyzes the recipient's address: if it matches the node's own address, the packet is accepted; otherwise it is passed on. Received packets belonging to the same message are accumulated, and once all packets of a message have arrived they are concatenated and delivered to the recipient. Copies of the packets are stored on the sending node until the receiving node reports successful delivery of the message; this ensures reliability. To deliver a letter to an addressee, you only need to know the address and the coordinates of the nearest mailbox: on its way to the addressee the letter passes through several post offices (nodes).
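    The splitting and reassembly described above can be sketched as a toy model (not real TCP/IP, which also involves checksums, ports and retransmission timers):

```python
def make_packets(message: bytes, src: str, dst: str, size: int = 1500):
    """Split a message into fixed-size packets, each tagged with the sender and
    destination addresses and a sequence number for reassembly."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"src": src, "dst": dst, "seq": n, "total": len(chunks), "data": c}
            for n, c in enumerate(chunks)]

def reassemble(packets):
    """Receiver side: once all packets of a message have arrived, order them
    by sequence number and concatenate the data."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["data"] for p in ordered)

packets = make_packets(b"x" * 4000, src="1.2.3.4", dst="5.6.7.8")
print(len(packets))  # → 3 (two full 1500-byte packets and one of 1000 bytes)
```

    Because every packet carries its own addresses and sequence number, the packets of one message may travel by different routes and arrive out of order, yet still be reassembled correctly.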

    FTP service

    The FTP (File Transfer Protocol) service takes its name from the file transfer protocol, but when FTP is considered as an Internet service, what is meant is not just the protocol but the service itself: access to files in file archives.

    On UNIX systems, FTP is a standard program that runs over TCP and is always supplied with the operating system. Its original purpose is to transfer files between computers on TCP/IP networks: a server program runs on one computer, and on the other the user starts a client program that connects to the server and sends or receives files (Figure 2).

    Figure 2. FTP protocol scheme

    The FTP protocol is optimized for file transfer, so FTP programs have become part of a separate Internet service. An FTP server can be configured so that you can connect to it not only under a specific user name but also under the conventional name anonymous. The anonymous client then gains access not to the entire file system of the computer but to a certain set of files on the server, which makes up the contents of the anonymous FTP server: a public file archive.
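    A sketch of the command exchange behind such an anonymous login. A real client (for example, Python's ftplib) sends these same text commands over the control connection; the server and file names here are hypothetical:

```python
def anonymous_login_commands(email: str = "guest@example.org") -> list[str]:
    """The commands an FTP client sends to enter a public file archive."""
    return [
        "USER anonymous",   # the conventional guest account name
        f"PASS {email}",    # by convention, one's e-mail address serves as the password
        "TYPE I",           # switch to binary ("image") mode for file transfer
        "RETR README.txt",  # download a file from the archive (hypothetical name)
    ]

for command in anonymous_login_commands():
    print(command)
```

    With ftplib the whole exchange collapses to a couple of calls, since FTP.login() defaults to the anonymous account.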

    Today, public file archives are organized mainly as anonymous FTP servers. A huge amount of information and software is available on such servers. Almost everything that can be offered to the public as files is available from anonymous FTP servers: free and demo versions of programs, multimedia, and finally plain texts - laws, books, articles, reports.

    Despite its prevalence, FTP has many disadvantages. FTP client programs are not always convenient or easy to use. It is not always possible to tell what kind of file is in front of you - whether it is the file you are looking for or not. There is no simple and universal search tool across anonymous FTP servers; special programs and services do exist for this, but they do not always give the desired results.

    FTP servers can also arrange password-protected access to files - for example, for their clients.

    TELNET service

    The purpose of the TELNET protocol is to provide a fairly general, bidirectional, eight-bit, byte-oriented means of communication. Its main purpose is to allow terminal devices and terminal processes to interact with each other. It is envisaged that the protocol can be used for terminal-to-terminal communication ("linking") or for process-to-process communication ("distributed computing").

    Figure 3. Telnet terminal window

    Although a Telnet session has a distinct client side and server side, the protocol is actually completely symmetrical. After a transport connection (usually TCP) is established, both of its ends play the role of "network virtual terminals" (Network Virtual Terminal, NVT) exchanging two types of data:

    Application data (that is, data that travels from the user to the text application on the server side and back);

    Telnet protocol commands, a special case of which are options that serve to clarify the capabilities and preferences of the parties (Figure 3).

    Although a Telnet session running over TCP is inherently full duplex, the NVT should be treated as a half-duplex device operating in buffered line mode by default.

    Application data passes through the protocol unchanged; that is, at the output of the second virtual terminal we see exactly what was entered at the input of the first. From the protocol's point of view the data is simply a sequence of bytes (octets), belonging by default to the ASCII set, but with the Binary option enabled - any bytes at all. Although extensions have been proposed for identifying the character set, they are not used in practice.

    All application-data octet values except \377 (decimal 255) are passed over the transport as is. The \377 octet is transmitted as the two-octet sequence \377\377, because the octet \377 is used by the protocol itself to encode commands and options.
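    This escaping rule is simple enough to sketch in a few lines of Python (an illustration of the rule, not a full Telnet implementation):

```python
IAC = 0xFF  # "Interpret As Command": octet \377, reserved by Telnet for commands

def escape_application_data(data: bytes) -> bytes:
    """Double every \\377 octet so the receiver does not mistake it for a command."""
    out = bytearray()
    for octet in data:
        out.append(octet)
        if octet == IAC:
            out.append(IAC)  # \377 goes on the wire as the two-octet sequence \377\377
    return bytes(out)

print(escape_application_data(b"\x41\xff\x42").hex())  # → 41ffff42
```

    The receiver applies the reverse rule: a lone \377 introduces a command, while \377\377 is collapsed back into a single data octet.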

    By default the protocol provides minimal functionality plus a set of options that extend it. The principle of negotiated options requires negotiation whenever an option is enabled: one side initiates a request, and the other side can either accept or reject the offer. If the request is accepted, the option takes effect immediately. Options are described separately from the protocol itself, and software support for them is optional. A protocol client (a network terminal) is instructed to reject requests to enable unsupported or unknown options.

    Historically, Telnet served for remote access to the command-line interface of operating systems. Later it came to be used for other text-based interfaces, up to and including MUD games. In theory, both sides of the protocol could even be programs rather than people.

    Sometimes telnet clients are used to access other protocols based on the TCP transport, see Telnet and other protocols.

    The Telnet protocol is used in the FTP control connection; that is, connecting to a server with the command telnet ftp.example.net ftp for debugging and experiments is not only possible but correct (unlike using telnet clients to access HTTP, IRC and most other protocols).

    The protocol provides neither encryption nor data authentication. It is therefore vulnerable to any attack to which its transport, the TCP protocol, is vulnerable. For remote access to systems, the SSH network protocol (especially its version 2) is now used instead; security was a central concern in its creation. So bear in mind that a Telnet session is quite insecure unless it runs on a fully controlled network or is protected at the network level (by one of the implementations of virtual private networks). Because of this unreliability, Telnet was long ago abandoned as a means of managing operating systems.




    Scientific and technological progress does not stand still; it is in constant development, search and improvement. Perhaps the most useful invention of human genius, the Internet, appeared relatively recently by the standards of civilization's development. At its core it is a unique tool for exchanging data.

    The Internet (the Network) is a virtual environment that provides access to information resources and whose elements are personal computers. They are combined into a single scheme and given unique addresses, using high-speed communication lines to host computers.

    The Internet is a vast network of countless devices. It serves to exchange information that exists in this network in various forms. Nowadays, not only computers can connect to the Internet. Mobile phones, tablets, game consoles, other gadgets and even TVs can easily go online at any time.

    The importance of this information space is undeniable due to the amazing communication opportunities between users of all devices connected to the Web.

    In technical terms, the online space is formed by countless computing devices connected to one another. Billions of PC users in different countries communicate with each other daily, transmit and receive useful information, download arrays of digital data in the form of applications, programs and utilities, watch videos and listen to music.

    The online environment has another important property: unlimited possibilities for storing information. Besides carrying personal communication, the Internet is a unique platform through which modern media inform the masses, and a colossal repository of world knowledge.

    What is the Internet?

    So that PC owners living on different continents can freely use search services and network resources, trunk cables are laid along the ocean floor, through which useful information is pumped around the clock.

    Personal computers communicate under the control of special protocols: a kind of instruction that sets the rules of communication between devices. A key element of this scheme is the IP address. Thanks to it, every participant receives a digital address of their own, by which they can be found and identified.

    For example, after entering the name "novichkam.info" into the browser's address bar, in a matter of moments the user finds himself on a website offering help to beginners. In technical terms, the software simply finds the IP address assigned to that particular site.

    The machine algorithm includes the following operations:

    1. the request is received by the name server, which stores the names of resources;
    2. the name of the resource is looked up in its memory, i.e. the required IP address is found;
    3. the client is taken to the website.
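    Step 2 of this algorithm can be tried directly with Python's standard socket library. The name "localhost" is used so the sketch works without network access; for a real site you would pass its domain name instead:

```python
import socket

# ask the system resolver which IP address hides behind a name
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1
```

    A browser performs exactly this lookup (via DNS) before it can open a connection to a site.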

    Other protocols also exist, such as HTTP; requests made this way carry the prefix http://

    What is the World Wide Web (WWW)

    For most of the target audience, the Internet service known as the World Wide Web (WWW, or simply the Web) is of the greatest interest. The term refers to a set of interconnected web pages stored on computers connected to the Network.

    A set of text files marked up in HTML and containing links, published on an electronic platform, is called a website. You can get acquainted with the content of a particular website by entering its address into the browser.

    The Web is positioned today as the most popular and in-demand service of the online space, i.e. the Internet. An important element of the Web is the hypertext link: by clicking on a link in a document, or by entering a unique URL (the name and path of a resource) into the browser, a person can view the desired text.

    Addressing system

    If you enter an incorrect address in the address bar or follow a broken link, the browser will promptly report an error (confirming the absence of the desired page). Sometimes such a request instead lands a person on an advertising (or fraudulent) site.

    In that situation you should correct the inaccuracy in the address field without trying to explore the advertising website, for security reasons: such sites may be infected with a virus. If the resource was created for the purpose of fraud, it is useful to get acquainted with our section describing the most popular methods of fraud on the network.

    The main part of any site's address is the domain, which exists for ease of remembering. As a rule, the domain corresponds to the address of the main page. It should be understood, however, that to actually load the page the computer uses an IP address such as "12.123.45.5". Agree that this combination is much harder to remember than the domain name of our site.

    It is important to know that typing "http://" or the "www" prefix in the address bar is not required at all. It is better to use a search engine, where a mistake will be corrected immediately, and the domain can be entered without the zone that causes confusion.
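    How a browser takes an address apart before anything goes over the network can be seen with Python's standard urllib; the address itself is a made-up example, only its structure matters:

```python
from urllib.parse import urlparse

url = urlparse("http://www.example.com/path/page.html")
print(url.scheme)    # the protocol prefix: http
print(url.hostname)  # the domain the resolver will turn into an IP address
print(url.path)      # which page on that host to fetch
```

    The scheme selects the protocol, the host name is handed to DNS, and only the path is sent to the server in the request itself.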

    What does the Internet give us?

    • unlimited communication and interaction

    Many look here for like-minded people, communicating on popular social projects and forums. Others prefer personal communication services such as ICQ or Skype. Visitors to dating websites hope to find their other half here;

    • unlimited possibilities of entertainment and organization of personal leisure

    Here you can listen to popular music tracks, enjoy new releases from film studios, play various games (including gambling), get acquainted with the works of modern authors and classics of literature, take surveys and tests, etc.

    • self-education

    In this environment of mass communication you can not only read useful articles but also take part in trainings and master classes and watch video tutorials;

    • creative development of personality

    Here you can meet remarkable people and visit their professional projects for creative and personal improvement;

    • purchase of goods and services

    Customers of virtual supermarkets can buy goods without leaving home. Online you can buy shares of industrial companies, buy tickets, reserve a hotel room, etc.;

    • new ways to make money

    There are ever more ways to make money on the Internet. For example, you can open an online store or create your own blog (website). For those just trying their hand in this field, it is easier to start with freelancing: writing articles to order, selling photos, offering services for creating and promoting various projects, doing web design and programming.

    • and much more. The information on our website will help you learn all the possibilities of this global network and feel at home in it.