Stanford Research Communication Program

How long does it take to read a digital message?

Joakim Jaldén
Signals, Sensors, and Systems
Royal Institute of Technology (KTH), Sweden
June 2003

Modern digital communication relies on complex systems. At the end of a communication system sits a receiver, such as a cell phone or a laptop computer, that translates electrical signals into useful information such as images or sound. The more complex the communication system, the more difficult it is to interpret the information that is sent. I am studying how much time it takes different receivers to make sense of what is sent to them. Understanding this tells us not only which receiver is best to use, but also how to make it better.

Imagine trying to read your doctor's handwriting on a prescription. Even if you cannot make out any single word on its own, you can usually form an idea of what he or she is trying to say. Knowing that it is a doctor's note and that it is written in English, you can usually figure out what it says by considering whole sentences at once. The same is true for digital information, except that now the words are numbers and the language is mathematics. As before, by looking at the entire message a receiver can interpret the information correctly even though individual parts of it are unclear. However, in order to teach a computer to interpret messages this way, you need to explain to it how to do so. In other words, you need a recipe, or an algorithm, for the computer to follow.

You could tell the computer to try all possible combinations of words and choose the most plausible one. The problem is that even if the language contains only two different words, a sentence of 10 words can be formed in more than a thousand ways, and a sentence of 20 words in over a million ways. If considering each candidate sentence takes the computer a fixed amount of time, then with this algorithm interpreting 20 words takes about a thousand times longer than interpreting 10. Clearly we need cleverer algorithms than simply trying every combination. A faster approach is to tell the computer to consider each word by itself, one by one, and make a decision. If it takes the computer some time to decide on 10 words, it will take only twice as long to decide on 20 with this algorithm. Unfortunately, this brings us back to the original problem: the words by themselves are hard to interpret correctly. So, although the word-by-word approach is quicker, there is usually a tradeoff between speed and the probability of correctly interpreting a message.
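To make the two recipes concrete, here is a minimal sketch of my own in Python; it is only an illustration, not the algorithms studied in this project. The toy channel, in which each transmitted word leaks slightly into the next one, as well as the noise level and the message length, are assumptions chosen just to keep the example small.

```python
# A toy comparison of "try everything" versus "decide word by word"
# for a short message of +1/-1 words sent over a channel that mixes
# neighbouring words and adds noise. Illustrative sketch only.

import itertools
import numpy as np

rng = np.random.default_rng(0)

n = 10                                         # number of words in the message
H = np.eye(n) + 0.5 * np.eye(n, k=-1)          # toy channel: each word leaks into the next
x_true = rng.choice([-1.0, 1.0], size=n)       # the transmitted message
y = H @ x_true + 0.3 * rng.standard_normal(n)  # what the receiver observes

# Strategy 1: try every possible message (2**n candidates) and keep the most plausible one.
def exhaustive_decode(y, H):
    best, best_cost = None, np.inf
    for bits in itertools.product([-1.0, 1.0], repeat=H.shape[1]):
        x = np.array(bits)
        cost = np.sum((y - H @ x) ** 2)        # how badly this guess explains the signal
        if cost < best_cost:
            best, best_cost = x, cost
    return best

# Strategy 2: decide each word on its own (n decisions), ignoring how words interact.
def symbol_by_symbol_decode(y):
    return np.sign(y)

print("errors, try everything:", np.sum(exhaustive_decode(y, H) != x_true))
print("errors, word by word:  ", np.sum(symbol_by_symbol_decode(y) != x_true))
```

The first decoder loops over 2**n candidate messages, so lengthening the message quickly makes it impractical; the second makes only n separate decisions, but it ignores how neighbouring words affect each other.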

I study how computers in receivers for digital communication systems, such as a cell phone, solve problems like this, and how accurately and quickly they do so. In particular, I am interested in how the time it takes to solve a problem grows with the size of the problem. In other words, what happens when we increase the length of the message from 10 to 20 words? I have studied previously proposed algorithms and derived mathematical expressions that describe how these algorithms behave when the problem size changes. The fundamental questions my research tries to answer are: Does it take two times or a thousand times longer to solve the problem if we double its size? Do we need to sacrifice accuracy in order to speed things up?
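Stated as rough growth laws, in my own notation rather than expressions taken from this project, the two recipes above scale with the message length n as follows, which is exactly what makes the doubling question interesting.

```latex
% Hedged summary of the two growth laws; n is the number of words in the message.
\[
  T_{\text{try everything}}(n) \propto 2^{n}
  \qquad\Rightarrow\qquad
  \frac{T_{\text{try everything}}(2n)}{T_{\text{try everything}}(n)} = 2^{n}
  \;(\approx 1000 \text{ for } n = 10),
\]
\[
  T_{\text{word by word}}(n) \propto n
  \qquad\Rightarrow\qquad
  \frac{T_{\text{word by word}}(2n)}{T_{\text{word by word}}(n)} = 2 .
\]
```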