(C) Daily Kos
This story was originally published by Daily Kos and is unaltered.



Technology 101 – People Alienated by Technology Can Embrace Political Conservatism [1]

This content is not subject to review by Daily Kos staff prior to publication.

Date: 2025-02-15

Fear of Change and Technology

Many people fear change and seek to return to a simpler time when they felt more in control of their lives. Some of the most dramatic changes in the last 70 years have come with advances in communications technology. Understanding newer technologies better may help people embrace change and feel less alienated and fearful in everyday life.

I believe that the drift towards social and political conservatism is partly a response to alienation from and fear of the modern, rapidly changing world. Technology has driven such change.

With greater understanding, people can come to see progress as good and desirable in different areas of both culture and technology. They can become more progressive and less attached to some perceived golden age in the past and less fearful of the future.

Human beings are extremely creative and innovative. These qualities should inspire idealism and confidence in the future rather than motivating a retreat into the past.

Note that this is a fairly long diary (8 to 10 pages), which is unusual on Daily Kos.

Explaining Complex Technologies

People use complex technologies every day but often have little understanding of how they work. Systems such as cell phones and computers are designed to allow “users” to interact with them while knowing almost nothing about their underlying mechanisms.

What follows is a brief description of a series of technologies developed mostly in the second half of the 20th century. Almost anyone with a high school education and some computer experience will be able to understand the basics of these technologies.

I learned much of this information in electrical engineering and computer science school and during my years as a computer engineer working for large tech companies. Most of the underlying technologies were invented at AT&T Bell Laboratories whose engineers have been granted over 33,000 US patents.

The high-level technologies are computers and communication systems (telephones, the Internet) but these depend on a whole series of interdependent more basic technologies.

Detailed technical knowledge is NOT required to understand most of these technologies.

We will concentrate on explaining the most commonly used and familiar aspects of these technologies such as Email, browsers, digital electronics, microprocessors, telephones, satellites, and the Internet.

The Internet

The internet requires a whole series of underlying technologies. We will explain them one-by-one.

URLs (Uniform Resource Locators)

What is a Uniform Resource Locator?

A URL is what you type into an Internet browser to reach a file across a network, for example https://theguardian.com or https://www.cnn.com/uk/index.html . The URL is used to locate a file on a remote computer so it can be downloaded (copied) to a local PC using the Internet. This is what happens during ordinary web surfing.

When Tim Berners-Lee looked at the file organizations (arrangements of files) on different computer systems (IBM, UNIX, DOS, Apple) in 1989, he noticed that he could address or locate files (or “resources”) on computers running any of these operating systems by giving a domain name (an internet address such as www.microsoft.com ) optionally followed by a directory/filename combination (like “/uk/index.html”) or only a filename (like “index.html” in the above example). Locating a file on a remote computer is necessary before it can be downloaded (copied to one’s local PC and formatted for display in an Internet browser’s window).
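As a small illustration, Python’s standard library can split one of the example URLs above into exactly the parts just described. This is a sketch for the curious reader, not part of any browser’s actual code:

```python
from urllib.parse import urlparse

# Split the example URL from the text into its named parts.
url = "https://www.cnn.com/uk/index.html"
parts = urlparse(url)

print(parts.scheme)   # the transfer protocol, e.g. "https"
print(parts.netloc)   # the domain name of the remote computer
print(parts.path)     # the directory/filename on that computer
```

Running this prints "https", "www.cnn.com", and "/uk/index.html", the three pieces a browser needs before it can fetch the file.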

HTTP (Hypertext Transfer Protocol)

How are Internet files moved across the web after being located on a web server (a computer that stores the files associated with websites)?

The Hypertext Transfer Protocol (HTTP) seen at the beginning of most web addresses (e.g., http://www.microsoft.com) defines a series of rules that tell how to move a file across the internet from one computer (a source computer or web server containing a given website) through intermediary computers (such as those of an Internet Service Provider or ISP) to a destination computer. The destination computer (usually a PC or personal computer) is typically running an Internet web browser program such as Internet Explorer, Chrome, Safari (on an iPhone), or Edge.
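Under the hood, an HTTP request is just structured text. Here is a sketch (in Python, with an invented helper name, and a placeholder host and path) of the kind of request a browser sends when asking a server for one file:

```python
def build_get_request(host: str, path: str) -> str:
    """Build the plain-text HTTP/1.1 request a browser sends for one file."""
    return (
        f"GET {path} HTTP/1.1\r\n"   # which file to fetch
        f"Host: {host}\r\n"          # which website on the server
        "Connection: close\r\n"      # close the connection when done
        "\r\n"                       # a blank line ends the request
    )

request = build_get_request("www.example.com", "/index.html")
print(request)
```

The server replies with a similar block of header text followed by the file’s contents, which the browser then formats for display.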

Internet Browsers and HTML

How do Internet Browsers like Edge, Firefox, Safari, and Chrome work during web surfing?

The first widely used graphical browser was a program written at the University of Illinois by computer scientists Marc Andreessen and Eric Bina. It was called Mosaic. Andreessen then co-founded Netscape, whose Navigator browser (released in 1994) became wildly popular. The Firefox/Mozilla browser is a descendant of Netscape’s code. Microsoft based its Internet Explorer browser on licensed Mosaic code in 1995, and it became the standard Internet browser in the late 1990’s. Microsoft’s Edge has taken its place recently.

Browsers are programs that can interpret Hypertext documents or files and format them for display in the browser’s window on a PC. Hypertext files contain a “markup language” called HTML (HyperText Markup Language) that tells the computer how text and pictures should appear in the browser’s window.

Such information as text font, color, position, and size, along with pictures (e.g., JPEG files) and their size and position, is defined in the downloaded (copied) HTML file. HTML files also contain hypertext “links” (usually underlined blue-colored text) and references to files on other websites (often located on other computers on the Internet). These references can download pictures and data from other internet websites for display in the browser’s window. The additional downloads and the final assembly of the document’s layout are controlled mostly by HTML commands in the original HTML file. Links (also called hypertext) on the webpage also let the user click to go to other Internet websites or to different pages on the same website.
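A browser must first find the links in an HTML file before it can underline them and react to clicks. Here is a minimal sketch of that step using Python’s standard HTML parser (real browsers are vastly more elaborate, and the sample page below is made up from the article’s example URLs):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href targets of <a> tags, as a browser must
    before it can underline links and react to clicks."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":                     # an <a ...> tag marks a link
            for name, value in attrs:
                if name == "href":         # href holds the link's URL
                    self.links.append(value)

page = ('<p>See <a href="https://theguardian.com">the news</a> and '
        '<a href="/uk/index.html">the UK page</a>.</p>')
collector = LinkCollector()
collector.feed(page)
print(collector.links)  # → ['https://theguardian.com', '/uk/index.html']
```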

Browsers are also “interactive,” allowing the user to click on check boxes or fill in forms on the web. Submitting that data sends the information back to a website over the Internet, thus allowing for two-way communication. Such interaction happens, for example, when the user orders a product on eBay or does banking on the web.

So, the Internet allows for two-way communication between a user and a remote computer (a web server) over a public communications network (the World Wide Web).

Text Editors (History)

Most people use text editors like MS Word to create documents. How did text editors come into being?

Before computers, book editors used pencil and paper to hand-edit manuscripts, writing “mark-up” or formatting symbols on the page for the printer or typesetter. These symbols told the typesetter where new paragraphs began, what text to underline or put in italics, and how to display photographs and chapter titles. This was basic manual text formatting.

The first computer formatting systems were “text processors” that used computer-readable “mark-up” symbols embedded in the text to describe how to format a book or document’s text and pictures. These were the first computerized markup languages. HTML, described earlier, is a computer-readable markup language embedded in Internet documents called HTML files. It too is a means of formatting documents, but ones displayed in a web browser rather than edited locally.

Programs such as UNIX’s NROFF (New RunOFF) would accept the marked-up text files as input and would output (or runoff) a printed version of the manuscript properly formatted for publication.
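A toy version of such a text processor can be sketched in a few lines of Python. The dot-commands here are invented for illustration and are not NROFF’s real command set:

```python
def runoff(lines):
    """Format a marked-up manuscript: lines starting with a dot are
    formatting commands (invented here), everything else is body text."""
    out = []
    for line in lines:
        if line == ".PP":                  # start a new paragraph
            out.append("")
        elif line.startswith(".TI "):      # a chapter/section title
            out.append(line[4:].upper())   # render titles in capitals
        else:
            out.append(line)               # ordinary text passes through
    return "\n".join(out)

doc = [".TI a simple chapter", ".PP", "Some body text."]
print(runoff(doc))
```

The marked-up input goes in one end; a formatted rendition comes out the other, which is the essence of batch text processing.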

But as computers became faster and memory cheaper, it was possible to create interactive “full screen editors” like Microsoft Word that used “embedded” (invisible to the user) markup symbols in the file to be edited. These invisible symbols formatted the text so the user could see any format change immediately on the computer screen when using the text editor. These editors were called WYSIWYG (what you see is what you get) editors because the computer screen showed the document just as it would appear if you printed it on paper.

In these editors, when the user changes the file by adding text, indenting a paragraph, or putting text in italics, these invisible markup symbols are automatically inserted into the text. The entire file is then immediately reread and reformatted by the text editor based on the newly-added embedded formatting symbols. The entire file is reformatted instantaneously every time the user presses a key on the keyboard when editing the text file.

Later, some of the earliest successful programs for Apple computers were phototypesetting packages whose sophisticated text editors allowed for high quality desktop publishing of manuscripts. This increased interest in self-publishing by authors in the 1980’s and beyond.

Communication Across Computer Networks

Communications across the Internet requires complex hardware and software. Computers were much less useful as stand-alone systems before networked systems were developed using the public Internet or other private networks developed by AT&T or IBM.

There is a “network portion” of the overall computer operating system which handles such communication. Computer operating systems like MS Windows or UNIX are complex programs that permit users to interact with computers.

Early operating systems were “command line” based and required users to learn complex commands to communicate with the computer. These evolved into “window-based” systems (mostly on PCs) that allowed users to do most tasks from within windows, which made interaction easier for less skilled users. Windowing systems were invented at Xerox’s Palo Alto Research Center, popularized by Apple’s Macintosh, and later adopted by Microsoft in its Windows operating system.

Much of a computer’s network software follows the seven-layer OSI (Open Systems Interconnection) network model, a protocol “stack” in which each layer handles a particular task. The topmost layer is the application layer, which includes familiar programs such as Email clients (Outlook or Gmail) and Internet browsers (Internet Explorer or Safari). These display data and accept commands in a user-friendly way.

The bottommost layer is the physical layer, consisting of network cards (circuit boards) and cables which send and receive electrical signals to/from the network.

The next layer is the datalink layer which allows one computer to talk to the next directly connected device. For instance, a PC can communicate directly with a local router (by Wi-Fi or cable) or a router can communicate directly with an Internet Service Provider over a phone line.

The most important layers handle routing of communications across the network. These layers contain the TCP/IP software: Transmission Control Protocol and Internet Protocol. TCP runs in the Transport layer, which handles end-to-end connections between programs over the network, and IP runs in the Network layer, which handles the routing of message segments, or “packets,” of information across the network.

Some may be familiar with IP addresses, which these layers use to identify a particular device or domain on the Internet. Every domain name (such as www.google.com) has a corresponding IP address.

The Internet is a “packet switched” network where messages (such as HTML files) are broken up into small packets (typically tens to hundreds of bytes of data, each carrying a roughly 20-byte IP routing header) at one end. These message segments are sent over different communication channels via TCP/IP and reassembled at the destination to form a complete message (for example, an HTML file displayed in a browser). The Internet is a self-healing network: if one or more communications links are broken (e.g., underground cables are cut), the messages will still make it to their destinations because they can travel over other physical links (or wires).
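Packet switching can be sketched with a toy example: break a message into numbered packets, deliver them out of order (as different routes would), and reassemble them by sequence number. A real IP header carries much more than a sequence number, but the principle is the same:

```python
import random

def to_packets(message: str, size: int = 8):
    """Break a message into numbered packets; the sequence number
    plays the role of the routing/reassembly header."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [(seq, chunk) for seq, chunk in enumerate(chunks)]

def reassemble(packets):
    """Packets may arrive in any order; sort by sequence number."""
    return "".join(chunk for _, chunk in sorted(packets))

msg = "Hello from a packet-switched network!"
packets = to_packets(msg)
random.shuffle(packets)        # simulate packets taking different routes
print(reassemble(packets))     # the original message comes back intact
```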

This has been an overview of the software components that permit communication over the Internet.

Email and Remote Login

The early internet had a “killer application” used mostly by researchers at universities. It was called Electronic Mail (Email). In the early 1970’s, researchers on the West Coast would leave work at five PM and their computer usage would drop. But East Coast researchers had 3 more hours at work and wanted to use those idle West Coast computer resources. So they wanted to log on remotely and execute programs on the West Coast computers.

But in order to run their programs on the remote computers, they first needed to transfer their programs (files) to the West Coast. One of the earliest Internet applications was FTP (File Transfer Protocol). This allowed files (and executable programs) to be moved from computer to computer over the internet, and FTP remains in use today. FTP is a communications protocol like HTTP described earlier.

But researchers also wanted to communicate efficiently with their colleagues across the country. So FTP was adapted into an email system. The creator of email called it a five hour “hack,” meaning a computer program that was quickly thrown together. To run an FTP session and transfer files, all you need to do is type ftp followed by a hostname or IP address, and (optionally) provide a login ID and password. Then you can transfer files to/from a remote computer and execute a limited set of commands on that remote computer.

If you can transfer files via FTP, all you have to do is add an Email header containing 1) date/time, 2) a subject line, 3) a “to” line, and 4) a “from” line to create Email much as we use it today. Email did not become common until TCP/IP networking software was incorporated into the UNIX operating system. By the early 1980’s, most computer engineers and scientists were using Email to communicate over networks. Email was probably the first “killer” or popular application that made computer networking valuable and desirable to a broad group of people.
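Python’s standard email library can build exactly this kind of four-field header on top of a message body. The addresses and contents below are made up for illustration:

```python
from email.message import EmailMessage

# The four header fields described above, on top of a plain message body.
msg = EmailMessage()
msg["Date"] = "Mon, 01 Jan 1979 09:00:00 -0500"
msg["Subject"] = "Remote job on the west-coast machine"
msg["To"] = "researcher@west.example.edu"
msg["From"] = "researcher@east.example.edu"
msg.set_content("Please run my program when the machine is free.")

# Printing the message shows the headers followed by the body,
# just as the mail system transmits it.
print(msg.as_string())
```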

Digital Voice and Audio

One of the great advances of the 20th century was the development of precise analog-to-digital and digital-to-analog conversion methods. The Nyquist-Shannon sampling theorem, developed at Bell Laboratories, defined how to take an analog sound wave (like a musical recording), turn it into numbers (make it digital), and then turn it back into an (analog) sound wave (a voice or music) at the far end while maintaining a defined level of quality. This process draws on the scientific areas of sampling theory, signal processing, and information theory.

Sound is composed of high and low pressure waves in the air. To turn analog sound waves (such as the pressure waves produced by a piece of music) into a digital stream of numbers, you must sample (record a numerical value of) the sound wave on a regular basis, storing the wave’s amplitude (its height at that instant) as a number (often between 0 and 255) for each sample. The more samples per second you take, the better the quality of the sound recording and reproduction. Sampling turns a continuous signal (wave) into discrete information (a series of numbers).

For instance, the human ear can hear sounds from a low frequency or pitch of 20Hz or cycles per second (for example, very low thunder) to an upper frequency of 20,000 cycles per second (a very high-pitched whining sound might be 10,000 Hz). A piano’s highest note vibrates at 4,186Hz (about 4.2KHz) and its lowest note vibrates at 27.5Hz (just above the lower range of human hearing). The higher the number of cycles per second of a sound, the higher the pitch.

So how does digital recording work?

If you want to digitize CD quality music, you should sample at a rate of 44,100 times per second (reduce the sound wave to 44,100 numbers every second). The Nyquist rule says you must sample at 2 times the maximum frequency (highest pitch) of the recorded sound to maintain the quality (fidelity) of the recording. So a CD recording will capture frequencies up to 22,050Hz (44,100/2). It can therefore record sounds about 2,050Hz above the upper range of human hearing (normally 20,000Hz).

If you want to digitize a telephone conversation, you will sample (take numerical values of) a voice signal 8,000 times a second. This will record sound in the range of 0 to 4,000Hz (a low quality recording or level of reproduction). So, telephones will never have the high quality of sound of CD players no matter how good the microphone, amplifier, or speakers are for the telephone conversation.
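Sampling is simple to sketch in Python: quantize a pure tone to 8-bit samples at the telephone rate and at the CD rate. A real codec would add filtering and companding, so this is only the bare idea:

```python
import math

def sample_wave(freq_hz, sample_rate, duration_s=0.001):
    """Sample a pure tone and quantize each sample to 8 bits (0-255),
    as telephone-style digitization does."""
    n = int(sample_rate * duration_s)
    samples = []
    for i in range(n):
        # Instantaneous amplitude of the sine wave, between -1 and 1.
        amplitude = math.sin(2 * math.pi * freq_hz * i / sample_rate)
        # Scale and round to an 8-bit value between 0 and 255.
        samples.append(round((amplitude + 1) / 2 * 255))
    return samples

telephone = sample_wave(440, 8000)    # 8,000 samples/s: fine up to 4,000Hz
cd_quality = sample_wave(440, 44100)  # 44,100 samples/s: fine up to 22,050Hz
print(len(telephone), len(cd_quality))  # samples captured per millisecond
```

The CD stream takes more than five times as many samples of the same millisecond of sound, which is exactly why it can reproduce much higher pitches.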

Digital Multiplexing (Transmitting Many Signals over a Single Wire)

Bell Labs developed a new digital technology in 1962, combining PCM (pulse code modulation) with Time Division Multiplexing, which allowed it to define the T1 transmission standard. The goal was to transmit and route tens of thousands of digitized telephone conversations over a minimal number of physical (usually copper wire) transmission lines (called trunks). Each T1 transmission line was able to transmit (or multiplex) 24 separate voice conversations on a single twisted pair of copper wires. Using technology based on the T1 standard, other transmission standards were defined that could transmit many thousands of digitized conversations over a single COAX (shielded cable) or fiber optic cable (more modern forms of transmission media).

All of these technologies relied on sampling analog voice signals and creating “digital pipelines” of data that could be sent over transmission lines using T1 (or later T2 and T3) data formats. Once the voice audio data was digitized and given a “timeslot” containing the data for a single conversation, it could be routed to the correct telephone switch where it could be connected to a destination telephone based on the dialed number. This process is called circuit switching since each conversation has a direct connection or circuit that it uses.
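Time division multiplexing itself is easy to sketch: interleave one sample from each conversation into successive frames, and peel them apart again at the far end. The toy data below uses two channels of three samples each rather than T1’s 24:

```python
def multiplex(channels):
    """Round-robin one sample from each conversation into frames, the
    way T1 time-division multiplexing interleaves its 24 channels."""
    frames = []
    for samples in zip(*channels):   # one frame = one sample per channel
        frames.extend(samples)
    return frames

def demultiplex(frames, n_channels):
    """Give every n-th sample back to its own conversation."""
    return [frames[i::n_channels] for i in range(n_channels)]

a = [1, 2, 3]        # toy "conversation" A: three samples
b = [10, 20, 30]     # toy "conversation" B: three samples
line = multiplex([a, b])
print(line)          # → [1, 10, 2, 20, 3, 30]: the shared wire's data
```

Both conversations travel over one wire, yet each can be recovered intact at the destination.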

Together, data sampling, analog-to-digital conversion, digital switching, transmission, call routing, and time division multiplexing make up the components of modern digital telephone technology.

A T1 line can carry 1.544 million bits (each a 0 or a 1) of data per second. Every second it sends 8,000 frames, and each frame contains one 8-bit sample from each of the 24 conversations plus a single framing bit. So in T1 transmission, 8,000 frames per second x (24 conversations x 8 bits + 1 framing bit) = 1,544,000 bits per second.
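The arithmetic can be checked directly:

```python
# T1 arithmetic: 8,000 frames/s, each frame = 24 channels x 8 bits + 1 framing bit.
BITS_PER_SAMPLE = 8
CHANNELS = 24
FRAMES_PER_SECOND = 8000
FRAMING_BITS = 1

bits_per_frame = CHANNELS * BITS_PER_SAMPLE + FRAMING_BITS  # 193 bits
t1_rate = bits_per_frame * FRAMES_PER_SECOND                # bits per second
print(t1_rate)  # → 1544000
```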

Older telephone switching systems use this method of circuit switching but more modern ones use packet switching where binary voice data is broken into data packets and sent over the Internet (similar to the Internet HTML files mentioned earlier).

In digital networks, voice data and non-voice data merge as one digital entity with only the destination point (telephone versus Internet browser) differing. Voice data sent over the Internet is called VoIP or voice over IP calling. This is why long distance internet phone calls are usually free since no dedicated phone circuits or resources (involving phone company switches and transmission lines) are used during IP (packet switched) phone calls on the public Internet.

Bell Labs engineers were the first to develop DSP (digital signal processor) chips that were used in the above processes to quickly convert digital signals to analog signals and the reverse. DSPs are still used today in cell phones and in many other electronic applications.

Transistors and Microchips

The Transistor has been credibly described as the most important invention in the history of the human race.

The transistor was developed at Bell Labs in 1947. In the 1950’s, there were portable radios that contained vacuum tubes. These radios were relatively large, heavy, and heat-producing, and consumed lots of battery power. Japanese companies used transistors to create small, low-power, reliable “transistor radios” in the late 1950’s, among the device’s first popular commercial applications. The easy availability of music from these inexpensive radios may have powered the Rock and Roll revolution of the 1960’s.

Simple transistors are silicon-based switches that permit current to flow or not flow based on a third electrical input. A small voltage applied to the transistor’s control terminal turns the main current path ON; removing it turns the current OFF, much like a light switch, but with no moving parts.

Connecting a series of transistors together using binary logic can add numbers, copy data from one place to another (load or store it), execute a single line of computer code, and accomplish a wide variety of other simple tasks. A computer program is made of hundreds or thousands of these simple instructions that, executed in the proper sequence, can accomplish very complex tasks.
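This idea can be sketched in Python: treat each logic gate as a function built from a single primitive (NAND, which a handful of transistors can implement), then wire the gates into a one-bit adder. It is a simulation of the logic, not real hardware:

```python
def nand(a: int, b: int) -> int:
    """The transistor-level building block: output is 0 only when
    both inputs are 1. Every other gate can be wired from NANDs."""
    return 0 if (a and b) else 1

def not_(a):     return nand(a, a)
def and_(a, b):  return not_(nand(a, b))
def or_(a, b):   return nand(not_(a), not_(b))
def xor(a, b):   return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """Add two one-bit numbers: returns (sum bit, carry bit)."""
    return xor(a, b), and_(a, b)

print(half_adder(1, 1))  # → (0, 1): 1 + 1 = binary 10
```

Chaining such adders bit by bit gives multi-digit binary addition, one of the "simple tasks" from which all computer arithmetic is built.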

Transistors were fast, reliable, compact, and used little power unlike the vacuum tubes which they gradually replaced in the 1950’s and 1960’s.

Engineers gradually discovered how to put more and more transistors on a silicon wafer, increasing the capabilities of the silicon chip. They created LSI (large scale integration) and VLSI (very large scale integration) chips, making them smaller, denser, and smarter as time progressed. They were able to put many thousands of transistors on a tiny chip and connect them to create ICs (integrated circuits). These improved chips led to the creation of the microprocessor.

Microprocessors

Microprocessors are small computers used to control cars, refrigerators, PCs, and many other electronic devices.

Before microprocessors, a device such as an air conditioner would have an analog controller that would read temperature and humidity from a thermostat and turn on a fan and compressor as required to keep a house at a preset temperature. But if the manufacturer changed the thermostat’s design, a new controller might need to be designed from scratch (expensive) to work with the new thermostat.

The microprocessor, invented in 1971 by Intel Corporation, was programmable. So instead of redesigning the whole control device as applications or hardware changed, only the software (programming) would need to be changed (inexpensive). Microprocessors could perform complex calculations and other activities dynamically as the environment changed, creating a new area of programming referred to as real-time systems.

So an automotive engine with a microprocessor could accept dozens of data inputs from the engine (such as the level of pollutants in the exhaust gases). It could be programmed to process this input data and then control the fuel-air mixture, the engine timing, the engine temperature, the radiator fan speed, and other functions.
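One pass of such a control loop can be sketched in Python. The sensor names and thresholds below are invented for illustration, not taken from any real engine controller:

```python
def control_step(sensor):
    """One pass of a hypothetical engine-control loop: read sensor
    inputs, decide outputs. Changing the rules means changing
    software only, not redesigning the hardware."""
    commands = {}
    # Dirty exhaust (too much unburned fuel) -> lean the mixture out.
    commands["fuel_air_ratio"] = ("leaner" if sensor["exhaust_ppm"] > 500
                                  else "hold")
    # Overheating coolant -> speed up the radiator fan.
    commands["fan"] = ("high" if sensor["coolant_temp_c"] > 100
                       else "normal")
    return commands

result = control_step({"exhaust_ppm": 650, "coolant_temp_c": 90})
print(result)  # → {'fuel_air_ratio': 'leaner', 'fan': 'normal'}
```

A real controller runs a loop like this many times per second, which is what makes it a real-time system.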

In short, microprocessors such as the Intel 8080 (and simpler, less expensive microprocessors called PLCs) lowered the cost of the design and production of new products, and multiplied the intelligence and sophistication of these products.

Cell Phones

Cellular communications technology was conceived at Bell Labs in 1947 (a basic, non-cellular mobile phone service had launched in St. Louis shortly before). The idea was to put a series of transmitters/receivers in a hexagonal grid. These hexagonal areas were called cells. Much later, as a user left one cell and traveled into the adjacent geographical cell, an active phone call would switch from one cell’s transmitter to the next without interrupting the user’s call.

This “cell handoff” mechanism was described further in research papers in 1973 and 1977. One of the first successful public commercial mobile phone networks was the ARP network in Finland, launched in 1971. The first automatic analog cellular systems (designated as 1G) ever deployed were Japan’s NTT system first used in 1979 for car phones in Tokyo (and later the rest of the country of Japan), and the NMT system which was released in the Nordic countries in 1981.

A major problem was that portable phones were too large and required too big a battery, so they could only be installed in cars, which could handle the weight of the battery and electronics. But in 1983, the first portable cell phone, Motorola’s DynaTAC 8000X, was launched on Ameritech’s first US 1G network. It reportedly cost $100M to develop and took over a decade to reach the market.

The phone had a talk time of just thirty minutes and took ten hours to charge. Consumer demand was strong despite the weight, battery life, and low talk time, and waiting lists were in the thousands. In 1991 the first (2G) GSM network (Radiolinja) launched in Finland. This second generation technology introduced a new form of communication called SMS or text messaging. Newer 3G networks were first launched in 2001 and allowed for faster download rates, letting users surf the Internet using browsers on their phones.

Communications Satellites

Bell Labs, in cooperation with NASA and French and British telecommunications agencies, developed the first active communications satellite, launched in 1962. It was called TELSTAR. It relayed video and voice information across the globe, but with a slight delay due to its orbiting distance from the earth and the limit of the speed of light. Satellites share their radio channel using a random-access algorithm; the scheme described here resembles the ALOHA protocol, a forerunner of Ethernet’s CSMA/CD (Carrier Sense Multiple Access with Collision Detection). Communications satellites are like mirrors: they accept electronic data signals from earth stations and reflect (or retransmit) them back to earth over a wide “footprint” which can span thousands of miles.

In satellite transmission, an earth station sends a television or audio signal to the satellite in bursts of digital data (radio waves). The satellite receives the data and if it is identified by the satellite as data that it should process, it rebroadcasts it back to the earth station and other receivers. If the received data is not garbled, the sender assumes the destination receivers also got the uncorrupted signal, and it waits to receive the next message. So this is the way the signal is relayed via satellite to TV stations, telephone companies, or to your personal TV via your home rooftop dish (for satellite TV).

The Collision Detection concept means that if two or more earth stations send data for the satellite to process at the same time, the data may be corrupted (a “collision” occurs). If a sending station does not receive an accurate reflected signal back, it waits a random period of time and rebroadcasts its original message. This collision recovery method works well as long as the system is not overloaded with so many messages that multiple “collisions” occur. At that point the transmission system breaks down and most transmissions fail.
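The retransmit-with-random-backoff idea can be sketched in a few lines of Python. The collides function below is a stand-in for a real, unpredictable shared channel:

```python
import random

def send_with_backoff(collides, max_tries=10, rng=random.Random(0)):
    """Keep retransmitting after a random wait until the echoed signal
    comes back clean; returns the number of attempts needed."""
    for attempt in range(1, max_tries + 1):
        if not collides(attempt):
            return attempt       # the echo was clean: success
        rng.uniform(0, 1)        # stand-in for the random waiting period
    return None                  # channel overloaded: give up

# Toy channel: the first two transmissions collide, the third gets through.
attempts = send_with_backoff(lambda attempt: attempt < 3)
print(attempts)  # → 3
```

When nearly every attempt collides, the function gives up, which mirrors how an overloaded random-access channel breaks down.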

It is also significant that Bell Labs invented, developed, and deployed the first silicon solar cells to produce electricity for the TELSTAR satellite. The solar energy business of today can be traced to this Bell Labs research and development effort.

There are many other areas of technology not covered here such as computer graphics (for art and entertainment), programming languages, compilers, databases (Oracle is a large commercial database for storing and organizing information), computer architecture, algorithms (such as sorting, searching, and hashing methods), voice recognition, encryption techniques, and transmission methods (fiber optical cables or microwave transmission). More recent areas of exploration include Artificial Intelligence (AI) and Quantum Computing.

But the above list covers a good sampling of the technologies most people use.

Thanks for reading. I hope you enjoyed this quick overview of some recent technological innovations, and that it helps you understand the revolution in technology and feel more at home in, and more optimistic about, the modern world.

[END]
---
[1] Url: https://www.dailykos.com/stories/2025/2/15/2303936/-Technology-101-People-Alienated-by-Technology-Can-Embrace-Political-Conservatism?pm_campaign=front_page&pm_source=more_community&pm_medium=web

Content appears here under this condition or license: Site content may be used for any purpose without permission unless otherwise specified.
