Originally Posted by Granz
UH OH, my inner teacher just reared up and seized the podium.
RS-232 - This is a protocol (a protocol is just a set of rules that everyone agrees to follow to get a job done - in this case, to transmit data) for sending data over a serial line. Data can be sent across wires in either serial or parallel format. In parallel, all of the bits in a piece of data are sent at the same time across multiple wires - a separate wire for each bit. In the case of our ASCII code, you need 7 bits (bit [Binary digIT] - the smallest amount of data that can be stored, either a zero or a one; look up the binary numbering system for more information) to transmit, or store, a single letter or numeral. That means you need at least 7 physical wires to transmit one character in parallel. Serial communication, on the other hand, sends a piece of data bit by bit across a single wire - a nice savings over paying for seven or more wires in every data cable. Without a protocol to define which bit is sent first, second, third, and so on, you could never determine what data was being sent. The RS-232 protocol defines things like that (among many others, such as the voltages used). RS-422 and RS-485 are other protocols for serial data transmission.
In any of those protocols, data can be transmitted in one direction only (simplex) or in both directions (duplex). In some of the older circuits, all of the wiring needed to carry data in both directions was present, but the rules allowed data to flow only one direction at a time. This is like a two-way radio conversation: you can take turns talking, but if you both talk at once neither of you gets much out of it. That arrangement is called half-duplex; the capability for two-way communication is there, but it is not used simultaneously. Newer computers have the ability to both talk and listen at exactly the same time - this is the full-duplex that you mentioned. Simplex, half-duplex, and full-duplex operation would each be a sub-protocol of the full RS-232 (or RS-422, or RS-485) protocol that you use for your communication.
Ok, Teach just gave the podium back to Experimenter.
UH-OH! MY inner teacher just reared up and seized the podium.
RS-232 is an electrical specification, not a protocol.
Originally posted on the Savage///Circuits Forums on August 19th, 2013, 12:44 PM by Chris Savage
Serial Communication Explained
This post is long overdue. With my history in technical support and electronics / microcontroller design, I often hear the term RS-232 used synonymously with serial communication. The fact is, they're not interchangeable. RS-232 is an electrical specification for the transmission of serial data, whereas serial communication describes sending data one bit at a time under some protocol. Serial communication does not imply RS-232 as the interface specification. You could use RS-232 (single-ended signaling) for serial communication, but you could just as easily use RS-422 or RS-485 (differential signaling). And many devices communicate serially at logic level, which could be TTL- or CMOS-compatible (also single-ended). Another important thing to know here is that an RS-232 driver inverts the data, while logic-level serial data remains non-inverted (true).
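To make the inversion point concrete, here is a small illustrative sketch (not tied to any real driver chip) of how an RS-232 line driver maps logic-level bits onto the wire: a logic 1 ("mark") becomes a negative voltage, a logic 0 ("space") becomes a positive one.

```python
# Illustrative sketch: RS-232 drivers invert logic-level serial data.
# Logic 1 -> "mark" (-3 V to -15 V on the wire)
# Logic 0 -> "space" (+3 V to +15 V on the wire)

def rs232_line_state(logic_bit: int) -> str:
    """Map a logic-level bit to the RS-232 line condition."""
    if logic_bit not in (0, 1):
        raise ValueError("bit must be 0 or 1")
    return "mark (-3V..-15V)" if logic_bit else "space (+3V..+15V)"

print(rs232_line_state(1))  # mark (-3V..-15V)
print(rs232_line_state(0))  # space (+3V..+15V)
```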
I think because RS-232 was the primary interface on early PCs, which had multiple RS-232 serial ports, we tend to think of serial data as RS-232. My biggest issue with this is that most serial data today simply isn't at RS-232 signal levels anymore, but follows some other specification, be it TTL, CMOS, RS-422, RS-485, USB...yes, that's right. USB replaces the older RS-232 technology on modern computers, which now also have serial hard drive interfaces such as Serial ATA (SATA). The big difference with USB and SATA is that their specifications include not only the electrical interface but the communication protocol as well.
While we're on the subject of communication protocols, let's discuss the difference between asynchronous and synchronous serial communication. Once again, the electrical interface specifications above are not specific to either form. In asynchronous serial, the sender uses a start signal to mark the beginning of the transmission, followed by some data bits and finally a stop signal. Both sides must agree on certain parameters for successful communication, such as baud rate (bits per second), bits per character, the order in which bits are sent, parity, etc. If the two sides are configured differently, coherent data communication will not be possible.
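The framing described above can be sketched in a few lines. This is a minimal illustration assuming the common 8N1 configuration (8 data bits, no parity, 1 stop bit), where the idle line is 1, the start bit is 0, and data goes out LSB first:

```python
# Minimal sketch of asynchronous serial framing (8N1).
# Both sides must agree on these parameters; here we build the bit
# sequence a UART would put on the line for one character.

def frame_8n1(byte: int) -> list[int]:
    """Return the bit sequence for one 8N1 character, LSB first."""
    if not 0 <= byte <= 0xFF:
        raise ValueError("byte out of range")
    data_bits = [(byte >> i) & 1 for i in range(8)]  # LSB first
    return [0] + data_bits + [1]  # start bit, 8 data bits, stop bit

bits = frame_8n1(ord("A"))  # 'A' = 0x41 = 0b01000001
print(bits)  # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
```

If the receiver were configured for a different bit order or character size, it would decode this same bit stream into garbage - which is the point made above.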
In synchronous serial communication, such as SPI (Serial Peripheral Interface) and I2C, there is a separate clock signal marking each bit that is communicated. Both sides must still agree on the protocol, and while SPI and I2C are both synchronous serial interfaces, each has its own protocol. And while they can use various electrical interfaces to communicate, I2C is an open-collector/drain interface, meaning that a device does not typically drive the signal line high, but only low, allowing the line to be pulled high by a resistor (typically 4.7 kΩ). I2C has the advantage that multiple devices can occupy the same two wires, SDA and SCL. These devices operate in a Master/Slave communication mode where the master is typically in control of the SCL line.
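The open-drain behavior is worth a quick sketch: because no device ever drives the line high, the bus level is effectively the wired-AND of every device's output, with the pull-up resistor supplying the high level only when everyone has released the line. This toy model assumes each device either pulls low or releases (goes high-impedance):

```python
# Sketch of an I2C-style open-drain bus as a wired-AND.
# Each device either pulls the line low or releases it; the pull-up
# resistor (typically 4.7 kΩ) only wins when every device releases.

def bus_level(device_outputs: list[bool]) -> int:
    """Each entry is True if that device has released the line
    (high impedance), False if it is actively pulling it low."""
    return 1 if all(device_outputs) else 0

print(bus_level([True, True]))   # 1: all released, resistor pulls high
print(bus_level([True, False]))  # 0: any device pulling low wins
```

This wired-AND property is also what lets multiple I2C masters detect bus contention: a device that releases the line but reads it back low knows someone else is driving it.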
SPI devices come in a few variations. Some devices have a single data line (DIO), a clock line (SCK) and a chip select line (CS). However, some devices follow the Motorola specification more closely, having a master-in/slave-out data line (MISO), a master-out/slave-in data line (MOSI), a synchronous clock line (SCLK) and a slave select line (SS). Like I2C, these devices operate in a Master/Slave fashion with the master in control of the clock and select lines. Unlike I2C devices, SPI devices cannot all share the same I/O lines: each needs its own select line. They can often share the clock and data lines, however, meaning only one additional I/O line is typically required for each SPI device added to the system, rather than three or four.
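Since SPI is built on two shift registers exchanging bits under the master's clock, a full byte is swapped in eight clocks. Here is a rough simulation of one mode-0 transfer (data sampled MSB first on each clock edge) - a sketch of the shift-register behavior, not a driver for any real device:

```python
# Sketch of one full-duplex SPI byte exchange: on each clock the master
# shifts a bit out on MOSI while the slave shifts a bit out on MISO,
# so after 8 clocks the two sides have swapped bytes.

def spi_transfer(master_byte: int, slave_byte: int) -> tuple[int, int]:
    """Simulate an 8-bit SPI transfer; returns (what the master
    received, what the slave received)."""
    master_rx = slave_rx = 0
    for i in range(7, -1, -1):           # MSB first
        mosi = (master_byte >> i) & 1    # master drives MOSI
        miso = (slave_byte >> i) & 1     # slave drives MISO
        # clock edge: both sides sample the incoming bit
        slave_rx = (slave_rx << 1) | mosi
        master_rx = (master_rx << 1) | miso
    return master_rx, slave_rx

# The master receives 0x3C while the slave receives 0xA5:
print(spi_transfer(0xA5, 0x3C))  # (60, 165)
```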
Both asynchronous and synchronous serial devices have communication speed limitations imposed by their electrical interface specification. For asynchronous devices this varies greatly, with many devices these days supporting several Gbps, where we originally started at 300 bps and typically topped out at 115.2 kbps on a PC serial port. SPI devices have speed ratings in their datasheets, usually given as a maximum clock frequency. I2C devices ran at rates of 100 kHz, 400 kHz or 1 MHz, and many are now capable of going much faster.
So there it is...a quick rundown on communication protocols versus electrical interface specifications, and how some devices bundle the two together while many others don't. I also wanted to add a quick blurb about protocols in wired and wireless communication, which also seems to be a point of contention for some people these days. I remember back in the day when I got my first 300 baud modem for my C= (Commodore) computer and had to write my own terminal program to get online since I didn't have one. But that didn't handle downloading files using X-Modem, Y-Modem, Z-Modem or Kermit. These were file transfer protocols designed to make sure the file you downloaded got from one PC to another intact.
I liken wireless communication to the old days of modems. In our wired world we don't worry too much about losing data, or even about corrupted data. When our microcontroller sends data to our serial LCD it is assumed to be intact, and there is no mechanism for ensuring that, such as error checking or correction. But in the wireless world things are very much like they were in the modem days, with the possibility of noise or other interference preventing your data from getting from point A to point B correctly. In many inexpensive analog and digital wireless systems, data is sent on the radio frequency and every receiver on that frequency either gets the data or it doesn't. If they do get it, they may get what was sent or something different. It is up to the end user to sort it all out.
But then we got Bluetooth and XBee transceivers, which include error checking and are able to retry when data arrives corrupted. And what I see happening sometimes is that people use these systems thinking that since the built-in protocol does all the work for them, they can just send it and forget it. But that's not the case, and I have seen people designing systems which can, in the event of a failure, reasonably be expected to cause damage or loss of life. It is in these systems that some people will assume the data always gets there because the radio makes sure of that. Let me clarify what the protocol does do in a radio system like this...it makes sure that your data is intact when it arrives at the other side. But what if the data never arrives? With the aforementioned file transfer protocols there was an ACK/NAK system in place, along with a CRC, to make sure that the data was received intact. But there was also a timeout, so that if the data never arrived both sides terminated communication and you had to try again. The protocol does allow for retries, but if after so many tries no ACK is received, it is assumed the other side is no longer listening.
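The ACK/NAK-with-timeout pattern those file transfer protocols used can be sketched in a few lines. This is a toy model, not real X-Modem (which uses a proper CRC; a simple checksum keeps the sketch short), but the shape is the same: verify, acknowledge, retry, and give up after a limit:

```python
# Toy sketch of ACK/NAK with a retry limit, in the spirit of the old
# file transfer protocols. The sender appends a checksum; the receiver
# verifies it and replies ACK or NAK; no reply at all models a timeout.

ACK, NAK = b"\x06", b"\x15"
MAX_RETRIES = 3

def checksum(payload: bytes) -> int:
    return sum(payload) & 0xFF

def receiver(frame: bytes) -> bytes:
    payload, chk = frame[:-1], frame[-1]
    return ACK if checksum(payload) == chk else NAK

def send(payload: bytes, channel) -> bool:
    """channel(frame) returns the receiver's reply, or None on timeout."""
    frame = payload + bytes([checksum(payload)])
    for _ in range(MAX_RETRIES):
        if channel(frame) == ACK:
            return True          # delivered intact
        # NAK (corrupted) or None (timeout): try again
    return False                 # assume the other side stopped listening

print(send(b"hello", receiver))        # True: perfect channel
print(send(b"hello", lambda f: None))  # False: every attempt timed out
```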
So on the Bluetooth and XBee wireless systems you have to remember that the protocol handles the error side of things, but it doesn't tell your application if the data never arrives. In a system where loss of communication can cause bad things to happen, it is still up to you to be ready to handle that. This means developing your own simple next-level/layer protocol to handle the things the radio doesn't. For example, say I want to send a command to turn on a water pump in response to a flood warning from a sensor, and the pump is wireless. I received the data from the sensor, so I send a command for the pump to turn on. Because I am using an XBee radio I can be sure that if my command is received it will be intact, so I don't have to worry about corrupt data. But what if the pump never received the command at all? By having the pump send me an ACK response, I know that it received my command. If I don't receive the ACK, I can retry a certain number of times before perhaps sounding an alarm to notify someone of the issue.
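That application-layer pattern looks something like the sketch below. The radio link is simulated by a callable, and all the names here are illustrative - this is not a real XBee API, just the command/ACK/retry/alarm logic described above:

```python
# Sketch of the application-layer protocol for the pump example:
# send the command, wait for an ACK, retry a few times, and raise
# the alarm if the ACK never comes.

MAX_TRIES = 3

def command_pump(radio_send) -> str:
    """radio_send(cmd) returns b'ACK' from the pump, or None if
    nothing comes back before our timeout."""
    for attempt in range(MAX_TRIES):
        if radio_send(b"PUMP_ON") == b"ACK":
            return "pump confirmed on"
    return "ALARM: pump not responding"

# Pump answers normally:
print(command_pump(lambda cmd: b"ACK"))  # pump confirmed on
# Pump never answers:
print(command_pump(lambda cmd: None))    # ALARM: pump not responding
```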
Things can get more complicated than this as well. Let's take another example of the same system, but now the sensor is wireless too. What if I never receive the alert from the sensor? Then I will never send the command to turn on the pump. So in a system like this we have two options. One is to have the sensor send data on a regular schedule and to set a timeout counter, so that loss of data from that sensor is detected as a timeout and an alarm sounded to let someone know. The other is to query each sensor for data on a regular schedule, and if a sensor does not respond, we sound the alarm alerting someone to the issue. In systems where safety is a concern it is important to implement your own protocol layer to ensure that everything is working, and not depend on the error checking in the radio protocol. I hope this information saves someone from a bad situation.
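The first option above - treating silence as a fault - can be sketched as a simple watchdog check. Timestamps are plain numbers here so the logic is easy to follow; the period and timeout values are illustrative:

```python
# Sketch of a heartbeat watchdog: the sensor promises a reading every
# SENSOR_PERIOD seconds, and the monitor raises the alarm once the
# silence exceeds the timeout (here, two missed readings tolerated).

SENSOR_PERIOD = 10           # seconds between expected readings
TIMEOUT = 3 * SENSOR_PERIOD  # alarm after the third missed reading

def check_sensor(last_heard: float, now: float) -> str:
    """Return 'ok' while reports are timely, 'ALARM' once they stop."""
    return "ok" if now - last_heard <= TIMEOUT else "ALARM"

print(check_sensor(last_heard=100, now=120))  # ok: 20 s of silence
print(check_sensor(last_heard=100, now=140))  # ALARM: 40 s of silence
```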
RS-232 defines the voltages present as well as the signal type - in this case single-ended - similar to how RS-485 is defined around differential signaling.
A serial protocol can run over RS-232, RS-485 or even wireless links, but serial does not imply RS-232.
The August 19th, 2013, 12:44 post I quoted above explains this in more detail and provides links. I can't apologize, because I've always been a stickler for keeping these things from becoming ambiguous. It's kind of like when someone pronounces nuclear as "nucular". I can't not respond...
You may have your podium back now, sir.