The Linux Guide Online

Chapter 02 - Networking

Part One - History and Introduction

2.1 History

The idea of networking is probably as old as telecommunications itself. Consider people living in the stone age, where drums may have been used to transmit messages between individuals. Suppose caveman A wants to invite caveman B for a game of hurling rocks at each other, but they live too far apart for B to hear A banging his drum. So what are A's options? He could (1) walk over to B's place, (2) get a bigger drum, or (3) ask C, who lives halfway between them, to forward the message. The last option is called networking.

Of course, we have come a long way from the primitive pursuits and devices of our forebears. Nowadays, we have computers talk to each other over vast assemblages of wires, fiber optics, microwaves, and the like, to make an appointment for Saturday's cricket match. In the following, we will deal with the means and ways by which this is accomplished, but leave out the wires, as well as the cricket part.

Linux users are exposed to mainly two kinds of networks: those based on UUCP and those based on TCP/IP. These are protocol suites and software packages that supply the means to transport data between two computers. We shall briefly describe UUCP, but will focus mainly on TCP/IP-based networks.

We define a network as a collection of hosts that are able to communicate with each other, often by relying on the services of a number of dedicated hosts that relay data between the participants. Hosts are very often computers, but need not be; one can also think of X-terminals or intelligent printers as hosts. Small agglomerations of hosts are also called sites.

Communication is impossible without some sort of language or code. In computer networks, these languages are collectively referred to as protocols. However, you shouldn't think of written protocols here, but rather of the highly formalized code of behavior observed when heads of state meet, for instance. In a very similar fashion, the protocols used in computer networks are nothing but very strict rules for the exchange of messages between two or more hosts.

2.2 UUCP Networks

UUCP is an abbreviation for Unix-to-Unix Copy. It started out as a package of programs to transfer files over serial lines, schedule those transfers, and initiate execution of programs on remote sites. It has undergone major changes since its first implementation in the late seventies, but is still rather Spartan in the services it offers. Its main application is still in wide-area networks based on dial-up telephone links.
UUCP was first developed by Bell Laboratories in 1977 for communication between their Unix-development sites. In mid-1978, this network already connected over 80 sites. It was running email as an application, as well as remote printing. However, the system's central use was in distributing new software and bugfixes. Today, UUCP is no longer confined to the Unix environment. There are both free and commercial ports available for a variety of platforms, including AmigaOS, DOS, Atari's TOS, etc.

One of the main disadvantages of UUCP networks is their low bandwidth. On one hand, telephone equipment places a tight limit on the maximum transfer rate. On the other hand, UUCP links are rarely permanent connections; instead, hosts rather dial up each other at regular intervals. Hence, most of the time it takes a mail message to travel a UUCP network it sits idly on some host's disk, awaiting the next time a connection is established.

Despite these limitations, there are still many UUCP networks operating all over the world, run mainly by hobbyists, which offer private users network access at reasonable prices. The main reason for the popularity of UUCP is that it is dirt cheap compared to having your computer connected to The Big Internet Cable. To make your computer a UUCP node, all you need is a modem, a working UUCP implementation, and another UUCP node that is willing to feed you mail and news.

2.3 TCP/IP Networks

Although UUCP may be a reasonable choice for low-cost dial-up network links, there are many situations in which its store-and-forward technique proves too inflexible, for example in Local Area Networks (LANs). These are usually made up of a small number of machines located in the same building, or even on the same floor that are interconnected to provide a homogeneous working environment. Typically, you would want to share files between these hosts, or run distributed applications on different machines.

These tasks require a completely different approach to networking. Instead of forwarding entire files along with a job description, all data is broken up into smaller chunks (packets), which are forwarded immediately to the destination host, where they are reassembled. This type of network is called a packet-switched network. Among other things, this makes it possible to run interactive applications over the network. The cost is, of course, a greatly increased complexity in software.

The solution that many have adopted is known as TCP/IP. In this section, we will have a look at its underlying concepts.

TCP/IP traces its origins to a research project funded by the United States DARPA (Defense Advanced Research Projects Agency) in 1969. This was an experimental network, the ARPANET, which was converted into an operational one in 1975, after it had proven to be a success.

In 1983, the new protocol suite TCP/IP was adopted as a standard, and all hosts on the network were required to use it. When ARPANET finally grew into the Internet (with ARPANET itself passing out of existence in 1990), the use of TCP/IP had spread to networks beyond the Internet itself. Most notable are local area networks, but in the advent of fast digital telephone equipment, such as ISDN, it also has a promising future as a transport for dial-up networks.

TCP/IP is the protocol suite on which the entire network depends. If, for example, you are connected to a network and use the rlogin command to log into a remote machine, you are using TCP/IP. Similarly, using NFS or NIS means you are using TCP/IP, too.

We will now have a closer look at the way TCP/IP works. You will need this to understand how and why you have to configure your machine. We will start by examining the hardware, and slowly work our way up.

2.3.1 Ethernets

The type of hardware most widely used throughout LANs is what is commonly known as Ethernet. It consists of a single cable with hosts being attached to it through connectors, taps or transceivers. Simple Ethernets are quite inexpensive to install, which, together with a net transfer rate of 10 Megabits per second accounts for much of its popularity.

Ethernets come in three flavors: thick, thin, and twisted pair. Most people prefer thin Ethernet, because it is very cheap: PC cards come for as little as US$50, and cable is in the range of a few cents per meter. However, for large-scale installations, thick Ethernet is more appropriate.

One of the drawbacks of Ethernet technology is its limited cable length, which precludes any use of it other than for LANs. However, several Ethernet segments may be linked to each other using repeaters, bridges, or routers. Repeaters simply copy the signals between two or more segments, so that all segments together act as if they were one Ethernet. Owing to timing requirements, there may not be more than four repeaters between any two hosts on the network. Bridges and routers are more sophisticated: they analyze incoming data and forward it only when the recipient host is not on the local Ethernet.

There are also other kinds of hardware that allow building larger-scale networks, as well as amateur (ham) radio based networking. Other techniques involve using slow but cheap serial lines for dial-up access. These require yet another protocol for the transmission of packets, such as SLIP or PPP.

2.3.2 The Internet Protocol

Of course, you wouldn't want your networking to be limited to one Ethernet. Ideally, you would want to be able to use a network regardless of what hardware it runs on and how many subunits it is made up of. Directing data across network boundaries to a remote host is called routing, and packets are often referred to as datagrams in this context. To facilitate things, datagram exchange is governed by a single protocol that is independent of the hardware used: IP, or Internet Protocol.

The main benefit of IP is that it turns physically dissimilar networks into one apparently homogeneous network. This is called internetworking, and the resulting ``meta-network'' is called an internet. Note the subtle difference between an internet and the Internet here. The latter is the official name of one particular global internet.

Of course, IP also requires a hardware-independent addressing scheme. This is achieved by assigning each host a unique 32-bit number, called the IP-address. An IP-address is usually written as four decimal numbers, one for each 8-bit portion, separated by dots. For example, quark might have an IP-address of 0x954C0C04, which would be written as This format is also called dotted quad notation.

You will notice that we now have three different types of addressing schemes: first there is the host's name, then there are IP-addresses, and finally there are hardware addresses, like the 6-byte Ethernet address. All of these somehow have to match, so that when you try to access a remote machine by any of its names, you end up at the correct address; in particular, the network has to find out which Ethernet address corresponds to an IP-address. For now, it's enough to remember that these steps of finding addresses are called hostname resolution, for mapping host names onto IP-addresses, and address resolution, for mapping the latter to hardware addresses.

2.3.3 The Transmission Control Protocol

Now, of course, sending datagrams from one host to another is not the whole story. If you log into a remote machine, you want to have a reliable connection between your rlogin process on your client and the host shell process. Thus, the information sent to and fro must be split up into packets by the sender, and reassembled into a character stream by the receiver. Trivial as it seems, this involves a number of hairy tasks.

A very important thing to know about IP is that, by intent, it is not reliable. Assume that ten people on your Ethernet started downloading the latest release of XFree86 from GMU's FTP server. The amount of traffic generated might be too much for the gateway to handle, because it's too slow and tight on memory. Now if you happen to send a packet to quark, the gateway sophus might just be out of buffer space for a moment and therefore unable to forward it. IP solves this problem by simply discarding the packet; it is irrevocably lost. It is therefore the responsibility of the communicating hosts to check the integrity and completeness of the data, and to retransmit it in case of an error.

This is performed by yet another protocol, TCP, or Transmission Control Protocol, which builds a reliable service on top of IP. The essential property of TCP is that it uses IP to give you the illusion of a simple connection between the two processes on your host and the remote machine, so that you don't have to care about how and along which route your data actually travels. A TCP connection works essentially like a two-way pipe that both processes may write to and read from. Think of it as a telephone conversation.

TCP identifies the end points of such a connection by the IP-addresses of the two hosts involved, and the number of a so-called port on each host. Ports may be viewed as attachment points for network connections. If we are to strain the telephone example a little more, one might compare IP-addresses to area codes (numbers map to cities), and port numbers to local codes (numbers map to individual people's telephones).

For example, when you use rlogin for remote logins, the client application (rlogin) opens a port on the client host and connects to port 513 on the server, on which the rlogind daemon is known to listen. This establishes a TCP connection. Using this connection, rlogind performs the authorization procedure and then spawns the shell. The shell's standard input and output are redirected to the TCP connection, so that anything you type to rlogin on your machine is passed through the TCP stream and given to the shell as standard input.

More on Ports

Ports may be viewed as attachment points for network connections. If an application wants to offer a certain service, it attaches itself to a port and waits for clients (this is also called listening on the port). A client that wants to use this service allocates a port on its local host, and connects to the server's port on the remote host.

An important property of ports is that once a connection has been established between the client and the server, another copy of the server may attach to the server port and listen for more clients. This permits, for instance, several concurrent remote logins to the same host, all using the same port 513. TCP is able to tell these connections from each other, because they all come from different ports or hosts. For example, if you twice log into your server from the same client machine, then the first rlogin client will use the local port 1023, and the second one will use port 1022. Both however, will connect to the same port 513 on quark.

This example shows the use of ports as rendezvous points, where a client contacts a specific port to obtain a specific service. In order for a client to know the proper port number, an agreement has to be reached between the administrators of both systems on the assignment of these numbers. For services that are widely used, such as rlogin, these numbers have to be administered centrally. This is done by the IETF (or Internet Engineering Task Force), which regularly releases an RFC titled Assigned Numbers. It describes, among other things, the port numbers assigned to well-known services. Linux uses a file mapping service names to numbers, called /etc/services.

2.4 Maintaining Your System

Throughout this book, we will mainly deal with installation and configuration issues. Administration is, however, much more than just that. After setting up a service, you have to keep it running, too. For most of them, only little attendance will be necessary, while some, like mail and news, require that you perform routine tasks to keep your system up-to-date. We will discuss these tasks in later chapters.

The absolute minimum in maintenance is to check system and per-application log files regularly for error conditions and unusual events. Commonly, you will want to do this by writing a couple of administrative shell scripts and run them from cron periodically.

2.4.1 System Security

Another very important aspect of system administration in a network environment is protecting your system and users from intruders. Carelessly managed systems offer malicious people many targets: attacks range from password guessing to Ethernet snooping, and the damage caused may range from faked mail messages to data loss or violation of your users' privacy. We will mention some particular problems when discussing the context they may occur in, and some common defenses against them.

When making a service accessible to the network, make sure to give it ``least privilege,'' meaning that you don't permit it to do things that aren't required for it to work as designed. For example, you should make programs setuid to root or some other privileged account only when they really need this. Also, if you want to use a service for only a very limited application, don't hesitate to configure it as restrictively as your special application allows. For instance, if you want to allow diskless hosts to boot from your machine, you must provide TFTP (the Trivial File Transfer Protocol) so that they can download basic configuration files from the /boot directory. However, when used unrestricted, TFTP allows any user anywhere in the world to download any world-readable file from your system. If this is not what you want, why not restrict TFTP service to the /boot directory?

Part Two - Issues of TCP/IP Networking

2.5 Networking Interfaces

To hide the diversity of equipment that may be used in a networking environment, TCP/IP defines an abstract interface through which the hardware is accessed. This interface offers a set of operations, which is the same for all types of hardware and basically deals with sending and receiving packets.

For each peripheral device you want to use for networking, a corresponding interface has to be present in the kernel. For example, Ethernet interfaces in Linux are called eth0 and eth1, and SLIP interfaces come as sl0, sl1, etc. These interface names are used for configuration purposes when you want to name a particular physical device to the kernel. They have no meaning beyond that.

2.6 IP Addresses

As mentioned earlier, the addresses understood by the IP-networking protocol are 32-bit numbers. Every machine must be assigned a number unique to the networking environment. If you are running a local network that does not have TCP/IP traffic with other networks, you may assign these numbers according to your personal preferences. However, for sites on the Internet, a central authority, the Network Information Center, or NIC, assigns these numbers.

For easier reading, IP addresses are split up into four 8-bit numbers called octets. For example, a machine could have an IP-address of 0x954C0C04, which is written as This format is often referred to as the dotted quad notation.

Another reason for this notation is that IP-addresses are split into a network number, which is contained in the leading octets, and a host number, which is the remainder. When applying to the NIC for IP-addresses, you are not assigned an address for each single host you plan to use. Instead, you are given a network number, and are allowed to assign all valid IP-addresses within this range to hosts on your network according to your preferences.

Depending on the size of the network, the host part may need to be smaller or larger. To accommodate different needs, there are several classes of networks, defining different splits of IP-addresses.

Class A: Class A comprises networks through The
network number is contained in the first octet. This provides
for a 24-bit host part, allowing roughly 1.6 million hosts
per network.

Class B: Class B contains networks through; the
network number is in the first two octets. This allows for
16320 nets with 65024 hosts each.

Class C: Class C networks range from through,
with the network number being contained in the first three
octets. This allows for nearly 2 million networks with up to
254 hosts.

Classes D, E, and F: Addresses falling into the range of through are either experimental, or are reserved for
future use and don't specify any network.

If we go back to the address above, we find that refers to host 12.4 on the class-B network

You may have noticed that in the above list not all possible values were allowed for each octet in the host part. This is because host numbers with octets all 0 or all 255 are reserved for special purposes. An address where all host part bits are zero refers to the network, and one where all bits of the host part are 1 is called a broadcast address. This refers to all hosts on the specified network simultaneously. Thus, is not a valid host address, but refers to all hosts on network

There are also two network addresses that are reserved, and The first is called the default route, the latter the loopback address. Network is reserved for IP traffic local to your host. Usually, address will be assigned to a special interface on your host, the so-called loopback interface, which acts like a closed circuit. Any IP packet handed to it from TCP or UDP will be returned to them as if it had just arrived from some network. This allows you to develop and test networking software without ever using a ``real'' network. Another useful application is when you want to use networking software on a standalone host. This may not be as uncommon as it sounds; for instance, many UUCP sites don't have IP connectivity at all, but still want to run the INN news system. For proper operation, INN requires the loopback interface.

2.7 IP Routing

2.7.1 IP Networks

When you write a letter to someone, you usually put a complete address on the envelope, specifying the country, state, zip code, etc. After you put it into the letter box, the postal service will deliver it to its destination: it will be sent to the country indicated, whose national service will dispatch it to the proper state and region, etc. The advantage of this hierarchical scheme is rather obvious: wherever you post the letter, the local postmaster will know roughly which direction to forward the letter in, but doesn't have to care which route the letter takes within the destination country.

IP-networks are structured in a similar way. The whole Internet consists of a number of proper networks, called autonomous systems. Each such system performs any routing between its member hosts internally, so that the task of delivering a datagram is reduced to finding a path to the destination host's network. This means, as soon as the datagram is handed to any host that is on that particular network, further processing is done exclusively by the network itself.

2.7.2 Subnetworks

This structure is reflected by splitting IP-addresses into a host and network part, as explained above. By default, the destination network is derived from the network part of the IP-address. Thus, hosts with identical IP-network numbers should be found within the same network, and vice versa.

It makes sense to offer a similar scheme inside the network, too, since it may consist of a collection of hundreds of smaller networks itself, with the smallest units being physical networks like Ethernets. Therefore, IP allows you to subdivide an IP-network into several subnets.

A subnet takes over responsibility for delivering datagrams to a certain range of IP-addresses from the IP-network it is part of. As with classes A, B, or C, it is identified by the network part of the IP-addresses. However, the network part is now extended to include some bits from the host part. The number of bits that are interpreted as the subnet number is given by the so-called subnet mask, or netmask. This is a 32-bit number, too, which specifies the bit mask for the network part of the IP-address.

2.7.3 The Domain Name System

Hostname Resolution

As described above, addressing in TCP/IP networking revolves around 32-bit numbers. However, you will have a hard time remembering more than a few of these. Therefore, hosts are generally known by ``ordinary'' names such as gauss or strange. It is then the application's duty to find the IP-address corresponding to this name. This process is called host name resolution.

An application that wants to find the IP-address of a given host name does not have to provide its own routines for looking up host names and IP-addresses. Instead, it relies on a number of library functions that do this transparently, called gethostbyname(3) and gethostbyaddr(3). Traditionally, these and a number of related procedures were grouped in a separate library called the resolver library; on Linux, they are part of the standard libc. Colloquially, this collection of functions is therefore referred to as ``the resolver''.
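The same lookup is available from scripting languages; Python, for instance, wraps gethostbyname(3) as socket.gethostbyname. Resolving the name localhost should work even without network connectivity:

```python
import socket

# Look up an IPv4 address for a host name via the system resolver.
# "localhost" conventionally maps to the loopback address.
print(socket.gethostbyname("localhost"))
```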

Now, on a small network like an Ethernet, or even a cluster of them, it is not very difficult to maintain tables mapping host names to addresses. This information is usually kept in a file named /etc/hosts. When adding or removing hosts, or reassigning addresses, all you have to do is update the hosts file on all hosts. Quite obviously, this becomes burdensome with networks that comprise more than a handful of machines. This is why, in 1984, a new name resolution scheme was adopted: the Domain Name System, designed by Paul Mockapetris.

DNS organizes host names in a hierarchy of domains. A domain is a collection of sites that are related in some sense - be it because they form a proper network (e.g. all machines on a campus, or all hosts on BITNET), because they all belong to a certain organization (like the U.S. government), or because they're simply geographically close. For instance, universities are grouped in the edu domain (the .edu extension of the fully qualified names), with each university or college using a separate subdomain below which its hosts are subsumed. The complete name of a host, formed from all the domains of the hierarchy, is called its fully qualified domain name, or FQDN. This name uniquely identifies a machine on the Internet.

Now, organizing the name space in a hierarchy of domain names nicely solves the problem of name uniqueness: with DNS, a host name has to be unique only within its domain for it to differ from all other hosts world-wide. Furthermore, fully qualified names are quite easy to remember. Taken by themselves, these are already very good reasons to split up a large domain into several subdomains.

But DNS does even more for you than this: it allows you to delegate authority over a subdomain to its administrators. For example, the maintainers of a subdomain may use their assigned IP-addresses in any fashion they like and name their machines according to their own conventions; it suffices that their own DNS data is updated, and the changes then propagate automatically to the rest of the world. To this end, the name space is split up into zones, each rooted at a domain. Note the subtle difference between a zone and a domain: a university's domain encompasses all hosts on campus, while its zone includes only those hosts that are managed by the Computing Center directly, for example those at the Mathematics Department. The hosts at the Physics Department belong to a different zone, managed by that department itself.

Name Lookups with DNS

At first glance, all this domain and zone fuss seems to make name resolution an awfully complicated business. After all, if no central authority controls what names are assigned to which hosts, then how is a humble application supposed to know?!
Now comes the really ingenious part of DNS. If you want to find out the IP-address of erdos, then, DNS says, go ask the people that manage it, and they will tell you.

In fact, DNS is a giant distributed database. It is implemented by means of so-called name servers that supply information on a given domain or set of domains. For each zone, there are at least two, at most a few, name servers that hold all authoritative information on hosts in that zone. When your application wants to look up information on a name, it contacts a local name server, which conducts a so-called iterative query for it. It starts off by sending a query to a name server for the root domain. The root name server recognizes that this name does not belong to its zone of authority, but rather to one below the edu domain. Thus, it tells you to contact an edu zone name server for more information, and encloses a list of all edu name servers along with their addresses. Your local name server will then go on and query one of those. In a manner similar to the root name server, the edu server knows that the university's administrators run a zone of their own, and points you to their servers. The local name server will then present its query to one of these, which will finally recognize the name as belonging to its zone, and return the corresponding IP-address.

Now, this looks like a lot of traffic being generated for looking up a measly IP-address, but it's really only minuscule compared to the amount of data that would have to be transferred if we were still stuck with HOSTS.TXT. But there's still room for improvement with this scheme.

To improve response time during future queries, the name server stores the information obtained in its local cache. The next time anyone on your local network wants to look up the address of a host in the same domain, your name server will not have to go through the whole process again, but will go to the authoritative name server directly.
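Such a cache can be sketched as a table of (address, expiry time) pairs. The toy class below illustrates the idea only; it is not how a real name server such as BIND stores its cache:

```python
import time

class DnsCache:
    def __init__(self):
        self._data = {}

    def put(self, name, address, ttl):
        # Remember the address together with the moment it expires.
        self._data[name] = (address, time.monotonic() + ttl)

    def get(self, name):
        entry = self._data.get(name)
        if entry is None:
            return None
        address, expires = entry
        if time.monotonic() >= expires:     # the datum outlived its TTL
            del self._data[name]
            return None
        return address

cache = DnsCache()
cache.put("", "", ttl=300)
print(cache.get(""))  # →
```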

Of course, the name server will not keep this information forever, but will discard it after some period. This expiry interval is called the time to live, or TTL. Administrators of the responsible zone assign such a TTL to each datum in the DNS database.

Domain Name Servers

Name servers that hold all information on hosts within a zone are called authoritative for this zone, and are sometimes referred to as master name servers. Any query for a host within this zone will finally wind up at one of these master name servers.
To provide a coherent picture of a zone, its master servers must be fairly well synchronized. This is achieved by making one of them the primary server, which loads its zone information from data files, and making the others secondary servers who transfer the zone data from the primary server at regular intervals.

One reason to have several name servers is to distribute work load, another is redundancy. When one name server machine fails in a benign way, like crashing or losing its network connection, all queries will fall back to the other servers. Of course, this scheme doesn't protect you from server malfunctions that produce wrong replies to all DNS requests, e.g. from software bugs in the server program itself.

Of course, you can also think of running a name server that is not authoritative for any domain. This type of server is useful nevertheless, as it is still able to conduct DNS queries for the applications running on the local network, and cache the information. It is therefore called a caching-only server.

Beside looking up the IP-address belonging to a host, it is sometimes desirable to find out the canonical host name corresponding to an address. This is called reverse mapping and is used by several network services to verify a client's identity. When using a single hosts file, reverse lookups simply involve searching the file for a host that owns the IP-address in question. With DNS, an exhaustive search of the name space is out of the question, of course. Instead, a special domain,, has been created, which contains the IP-addresses of all hosts in a reverted dotted-quad notation. For instance, the IP-address corresponds to the name
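Constructing the reverse-lookup name is a simple string transformation: reverse the order of the octets and append the special domain.

```python
def reverse_name(addr: str) -> str:
    # Reverse the octets of the dotted quad and append the special domain.
    return ".".join(reversed(addr.split("."))) + ""

print(reverse_name(""))  # →
```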

2.8 Devices, Drivers, and all that

Up to now, we've been talking quite a bit about network interfaces and general TCP/IP issues, but didn't really cover exactly what happens when ``the networking code'' in the kernel accesses a piece of hardware. For this, we have to talk a little about the concept of interfaces and drivers.

First, of course, there's the hardware itself, for example an Ethernet board: this is a slice of Epoxy, cluttered with lots of tiny chips with silly numbers on them, sitting in a slot of your PC. This is what we generally call a device.

For you to be able to use the Ethernet board, special functions have to be present in your kernel that understand the particular way this device is accessed. These are the so-called device drivers. For example, Linux has device drivers for several brands of Ethernet boards that are very similar in function. They are known as the ``Becker Series Drivers'', named after their author, Donald Becker. A different example is the D-Link driver that handles a D-Link pocket adaptor attached to a parallel port.

But, what do we mean when we say a driver ``handles'' a device? Let's go back to that Ethernet board we examined above. The driver has to be able to communicate with the peripheral's on-board logic somehow: it has to send commands and data to the board, while the board should deliver any data received to the driver.

In PCs, this communication takes place through an area of I/O-memory that is mapped to on-board registers and the like. All commands and data the kernel sends to the board have to go through these registers. I/O memory is generally described by giving its starting or base address. Typical base addresses for Ethernet boards are 0x300, or 0x360.

Usually, you don't have to worry about any hardware issues such as the base address, because the kernel makes an attempt at boot time to detect a board's location. This is called autoprobing, which means that the kernel reads several memory locations and compares the data read with what it should see if a certain Ethernet board was installed. However, there may be Ethernet boards it cannot detect automatically; this is sometimes the case with cheap Ethernet cards that are not-quite clones of standard boards from other manufacturers. Also, the kernel will attempt to detect only one Ethernet device when booting. If you're using more than one board, you have to tell the kernel about this board explicitly.

Another such parameter that you might have to tell the kernel about is the interrupt request channel. Hardware components usually interrupt the kernel when they need care taken of them, e.g. when data has arrived, or a special condition occurs. In a PC, interrupts may occur on one of 15 interrupt channels numbered 0, 1, and 3 through 15. The interrupt number assigned to a hardware component is called its interrupt request number, or IRQ.

As described in section 2.5, the kernel accesses a device through a so-called interface. Interfaces offer an abstract set of functions that is the same across all types of hardware, such as sending or receiving a datagram.

Interfaces are identified by means of names. These are names defined internally in the kernel, and are not device files in the /dev directory. Typical names are eth0, eth1, etc, for Ethernet interfaces. The assignment of interfaces to devices usually depends on the order in which devices are configured; for instance the first Ethernet board installed will become eth0, the next will be eth1, and so on. One exception from this rule are SLIP interfaces, which are assigned dynamically; that is, whenever a SLIP connection is established, an interface is assigned to the serial port.