**Quantum to Classical Transition for Quantum Walks**

*Prof. Todd A. Brun, Dr. Hilary A. Carteret & Dr. Andris Ambainis*

Quantum walks are a unitary, reversible quantum analog of ordinary classical random walks. Instead of moving randomly right or left, we imagine a particle moving in a superposition of right and left.

In a classical random walk, the probability distribution is roughly Gaussian, and spreads with the square root of time; in the quantum walk, the distribution is oscillatory, and spreads linearly with time. If the quantum system is coupled to the environment, however, decoherence should make it behave classically in some limit.
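
This contrast can be illustrated with a short numerical sketch (illustrative only, assuming the standard Hadamard coin; not the analytical treatment of this work):

```python
import numpy as np

def hadamard_walk_std(steps):
    """Standard deviation of the position distribution of a discrete-time
    Hadamard-coin quantum walk on the line after `steps` steps."""
    n = 2 * steps + 1                      # positions -steps .. +steps
    psi = np.zeros((n, 2), dtype=complex)  # amplitude[position, coin]
    psi[steps, 0] = 1.0                    # start at the origin
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ H.T                    # unitary coin flip
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]       # coin state 0 steps right
        shifted[:-1, 1] = psi[1:, 1]       # coin state 1 steps left
        psi = shifted
    prob = (np.abs(psi) ** 2).sum(axis=1)
    x = np.arange(-steps, steps + 1)
    mean = (x * prob).sum()
    return np.sqrt(((x - mean) ** 2 * prob).sum())

T = 100
print(hadamard_walk_std(T))   # grows roughly linearly in T (ballistic)
print(np.sqrt(T))             # classical spread after T steps (diffusive)
```

The quantum spread grows proportionally to the number of steps, while the classical spread grows only as its square root.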

In this work, we found an analytical expression for the second moment of the position distribution in the long-time limit for the quantum walk with decoherence, and also for the quantum walk driven by a higher-dimensional "quantum coin." We were able to show that in the first case the spread of the distribution grows like the square root of time at long times, just as in the classical case, and in the second, it grows linearly with time. In both cases, however, the spread is faster than the analogous classical spread.

Quantum walks are a subject of considerable interest at the moment, as it is hoped that they will lead to a new class of quantum algorithms. However, any realistic implementation will have to deal with the effects of noise and decoherence. This work is a step towards understanding that behavior.

Learn More:

- **Todd A. Brun**, Hilary A. Carteret and Andris Ambainis, "The quantum to classical transition for random walks," Phys. Rev. Lett. **91**, 130602 (2003). quant-ph/0208195.
- Julia Kempe, "Quantum random walks - an introductory overview," Contemporary Physics **44**, 307-327 (2003). quant-ph/0303081.

One of the biggest obstacles in certain quantum computation schemes is the difficulty of nondestructively measuring the spin of a single electron. One promising technique is magnetic resonance force microscopy, in which a microcantilever is driven at its resonance frequency by an oscillating spin.

This system is complicated. The cantilever suffers dissipation, and also has thermal noise. The spins precess at a very rapid rate compared to the oscillator. The position of the cantilever is monitored by optical techniques which have their own intrinsic shot-noise, and which produce back-action on the system.

We developed a very accurate adiabatic approximation, which allows one to eliminate the rapid spin precession from the description of the system. We were then able to numerically simulate this system, including all of the above sources of noise, and show how the response of the cantilever followed the localization of the spin state. We also developed expressions for the signal-to-noise ratio of the measurement.

Our results indicate that current experiments are still two or three orders of magnitude short of single-spin sensitivity, but rapid experimental advances should make such measurements possible within the next couple of years.

Learn More:

- **Todd A. Brun** and Hsi-Sheng Goan, "Realistic simulations of single-spin nondemolition measurement by magnetic resonance force microscopy," Phys. Rev. A **68**, 032301 (2003). quant-ph/0302178.
- J.A. Sidles, Appl. Phys. Lett. **58**, 2854 (1991).
- G.P. Berman, F. Borgonovi, H.-S. Goan, S.A. Gurvitz and V.I. Tsifrinovich, Phys. Rev. B **67**, 094425 (2003).

**Iterative Message Passing for Rapid PN Acquisition**

*Prof. Keith M. Chugg & Ph.D. Candidate Mingrui Zhu*

Spread-spectrum and ultra-wideband systems use pseudo-random sequences to allow multiple users to share the channel and to mitigate interference.

The patterns are periodic, so that a receiver that knows the pattern can synchronize autonomously. Longer patterns are preferred, but are more difficult to synchronize. Traditional methods for synchronization are either slow with simple hardware (serial search) or fast with complex hardware (parallel search).

In this work, we found a solution that is both fast and requires only simple hardware. We accomplished this by realizing that many of the most popular pseudo-random sequences (M-sequences) can be modeled in a way analogous to the revolutionary turbo-like codes. An example of this concept is given in the figure above: here a PN generator with 2^15 possible states is decomposed into a PN generator with two states and another system that involves only a delay element.

Thus, the synchronization algorithm that we developed is essentially a generalization of the turbo-decoding algorithm (iterative message passing).
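
The sequences being acquired come from linear feedback shift registers (LFSRs); a minimal M-sequence generator can be sketched as follows (a degree-4 register for clarity; the 2^15-state generator in the figure corresponds to degree 15):

```python
def lfsr_msequence(taps, nbits, state=1):
    """One period of an M-sequence from a Fibonacci LFSR.
    `taps` are 1-indexed feedback positions; [4, 1] encodes the primitive
    polynomial x^4 + x + 1, giving the maximal period 2^4 - 1 = 15."""
    period = (1 << nbits) - 1
    out = []
    for _ in range(period):
        out.append(state & 1)              # output the low bit
        fb = 0
        for t in taps:                     # feedback = XOR of tapped bits
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return out

seq = lfsr_msequence([4, 1], 4)
print(len(seq), sum(seq))   # 15 8: period 15, with 8 ones and 7 zeros
```

The linear recursion visible in the feedback line is what allows an M-sequence generator to be re-modeled as a concatenation of simpler subsystems, as in the figure.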

Learn More:

- M. Zhu and **K. M. Chugg**, "Iterative Message Passing Techniques for Rapid Code Acquisition," in Proc. IEEE Milcom 2003, Oct. 2003. *(This paper received the Fred W. Ellersick Award for **Best Paper** in the Unclassified Program.)*
- M. K. Simon, J. K. Omura, **R. A. Scholtz**, and B. K. Levitt, __Spread Spectrum Communications Handbook__, McGraw-Hill, 1994.
- **K. M. Chugg**, A. Anastasopoulos, and X. Chen, __Iterative Detection: Adaptivity, Complexity Reduction, and Applications__, Kluwer Academic Publishers, 2000.

Polarization mode dispersion (PMD) is a crucial limitation in optical communication systems, arising either from high-PMD legacy fiber or from the PMD of many in-line components. PMD occurs because the same spectral component of the optical data splits into the fiber's two orthogonal polarization states (the principal states of polarization, or PSPs) and travels down the fiber at slightly different speeds. Deleterious PMD effects are stochastic, time-varying, temperature-dependent, and worsen as the bit rate rises. Moreover, the instantaneous first-order PMD (the differential group delay, or DGD) follows a Maxwellian probability distribution, leaving some finite possibility of a network outage at all times.
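
For illustration, the Maxwellian statistics can be reproduced by modeling the instantaneous DGD as the magnitude of a three-component Gaussian PMD vector (a standard model; the 10 ps mean below is an illustrative number, not a measurement):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_dgd(mean_dgd_ps, n):
    """Draw n instantaneous DGD values (in ps) from a Maxwellian
    distribution: the DGD is the magnitude of a 3-component Gaussian PMD
    vector, with sigma set so the ensemble mean is mean_dgd_ps
    (Maxwellian mean = 2 * sigma * sqrt(2/pi))."""
    sigma = mean_dgd_ps / (2.0 * np.sqrt(2.0 / np.pi))
    tau = rng.normal(0.0, sigma, size=(n, 3))
    return np.linalg.norm(tau, axis=1)

dgd = sample_dgd(10.0, 200_000)
print(dgd.mean())          # close to the 10 ps mean DGD
print((dgd > 30).mean())   # small but finite tail: outage never quite vanishes
```

The slowly decaying tail of the Maxwellian is exactly why some finite outage probability always remains.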

Both electrical and optical techniques are available for mitigating PMD in an optical link. As PMD varies on a millisecond time scale, both electrical and optical PMD mitigators must be dynamic, with a feed-forward or feedback loop that monitors the quality of the incoming signal and provides a control signal to the PMD mitigators. It has been shown that using the signal's degree of polarization (DOP) has advantages over other techniques for PMD monitoring, as it is not affected by other distortion sources such as chromatic dispersion.

Unfortunately, DOP measurements as a function of instantaneous DGD suffer from the following crucial systems disadvantages: (i) the DGD monitoring window is small when measuring short-pulse return-to-zero (RZ) signals, and (ii) sensitivity is lacking when measuring a non-return-to-zero (NRZ) signal. It would be highly desirable to have a PMD monitor in which the DOP can be measured to obtain wide DGD monitoring windows and wide dynamic range for both RZ and NRZ signals.

In this work, we propose that partial optical filtering can dramatically enhance the DGD monitoring range and/or the DOP dynamic range and sensitivity to DGD in DOP-based DGD monitors. We apply this technique to 10 Gbit/s NRZ, carrier-suppressed RZ (CSRZ), alternate-chirped RZ (ACRZ), and 40 Gbit/s RZ signals and conclude that both symmetric filtering (centered on the central frequency component of the optical spectrum) and asymmetric filtering (offset from the center of the optical spectrum by the bit-rate frequency) with an optical filter can offer significant increases in the DGD monitoring range of these systems. However, even given equal bit rate and pulse width, the filter shape, order, and bandwidth required to obtain maximum DGD monitoring range and DOP dynamic range can vary as the data modulation format changes.

Learn More:

- **Alan E. Willner**, S.M. Reza Motaghian Nezam, L.-S. Yan, Z. Pan, and M.C. Hauer, "Tutorial on Monitoring and Control of Polarization-related Impairments in Optical Fiber Systems," (Invited Tutorial Paper) *IEEE/OSA Journal of Lightwave Technology*, Special Issue on Polarization-Mode-Dispersion, vol. 21, 2003.
- S.M. Reza Motaghian Nezam, Lianshan Yan, John E. McGeehan, Yongqiang Shi, **Alan E. Willner**, and Steve Yao, "Enhancing the Dynamic Range and DGD Monitoring Windows in DOP-based DGD Monitors Using Symmetric and Asymmetric Partial Optical Filtering," to appear in *IEEE/OSA Journal of Lightwave Technology*, Special Issue on Polarization-Mode-Dispersion, vol. 21, 2003.
- S.M. Reza Motaghian Nezam, Lianshan Yan, John E. McGeehan, Yongqiang Shi, **Alan E. Willner**, and Steve Yao, "Wide-Dynamic-Range DGD Monitoring by Partial Optical Signal Spectrum DOP Measurement," in *the Technical Digest of Optical Fiber Communication Conference and Exhibit (Post-deadline paper), IEEE/OFC 2002*, pp. 924-926.

Sensor networks have emerged as a fundamentally new tool for monitoring inaccessible environments. They are distinguished from traditional networks by strict limitations on system bandwidth and sensor energy resources. These constraints motivate methods that reduce communication or use power more efficiently at each sensor.

We have considered a broad variety of problems in sensor network research, from optimizing compression schemes for target localization to decentralized boundary estimation, which could be used for identifying chemical spills. In classical compression, the objective is to compress so that the original source signal can be reconstructed exactly, or at least closely in some sense; if the goal is localization, however, the actual form of the original signal may be less relevant. In our boundary estimation work, we considered an estimation strategy that balances estimation accuracy against the cost of communication across the sensor network.

Several information-theoretic problems have been considered as well, including the capacity of ad hoc wireless networks under power and complexity constraints. Cooperation among sensor nodes would generally benefit communication across the network if it came for free; in a sensor network, however, cooperation itself consumes power, which is a precious commodity. We are also investigating physical-layer algorithms that will realize some of the gains promised by our information-theoretic studies.

Learn More:

- R. Nowak, **U. Mitra**, and R. Willett, "Estimating Inhomogeneous Fields Using Sensor Networks," IEEE Journal on Selected Areas in Communications, Special Issue on Fundamental Performance Limits of Wireless Sensor Networks, accepted for publication; see also IPSN'03, Palo Alto, CA.
- L. Vasudevan, A. Ortega and **U. Mitra**, "Application-aware Quantization for Time Delay Estimation in Sensor Networks," IEEE Workshop on Statistical Signal Processing (**invited paper**), St. Louis, September 2003.
- **U. Mitra** and A. Sabharwal, "On Achievable Rates of Complexity Constrained Relay Channels," Allerton Conference, Monticello, IL, October 2003.

There has been recent renewed interest in optical code-division-multiple-access (OCDMA) due to its potential for enhanced data security, especially when considering the fine granularity of traffic and need for asynchronous data transfer in local area networks (LANs). However, a key drawback for OCDMA has been the necessity of generating, propagating, and detecting extremely short optical pulses in a specific chip time (i.e. the time-domain subdivision of a bit) such that: (i) there is sufficient orthogonality among the codes, and (ii) a large number of codes can be accommodated.

The use of time-domain chips can be considered a "one-dimensional" (1D) coding scheme for OCDMA. One approach for partially alleviating the small chip time has been the introduction of a two-dimensional (2D) OCDMA architecture, in which each bit is subdivided into a combination of chip times and a discrete set of wavelengths. However, even with a time/wavelength approach, a reasonable number of wavelengths and chip times still cannot easily accommodate many users - for example, 11 chip times and 4 wavelengths will accommodate only 12 users. We take advantage of the fact that light can be transmitted on two orthogonal polarization states to use polarization as a third degree of freedom when generating OCDMA user codes. Such a three-dimensional (3D) user code uses the two polarization states along with chip times and discrete wavelengths to uniquely identify users such that each code is sufficiently orthogonal for high autocorrelation and low cross-correlation. This technique of polarization coding can significantly increase the number of users, under the condition that the polarization states do not overly couple. This condition would hold within a typical OCDMA LAN environment using newer fiber.

In this work, we demonstrate a 3D time-wavelength-polarization encoding and decoding OCDMA system. We generate a 3D codeset where a given user has chips encoded in time, wavelength, and polarization such that each individual user's code is polarization-rotation-invariant with respect to any other user's code. We encode 1 Gbit/s, 11 Gchip/s data using free-space and fiber delay lines and polarization beam combiners, and decode using a polarization beamsplitter, wavelength demultiplexers, and additional fiber/free-space delays. We use separate electronic decision circuits on each polarization's code to ensure the existence of a "1" bit, and then AND the two decisions together to obtain a final 1 Gbit/s data output. This technique suffers only ~1.8 dB of power penalty to receiver sensitivity at a 10^-9 bit-error rate.

Learn More:

- J. E. McGeehan, S.M. Reza Motaghian Nezam, P. Saghari, T.H. Izadpanah, **A. E. Willner**, Reza Omrani, and P.V. Kumar, "3D Time-Wavelength-Polarization OCDMA Coding for Increasing the Number of Users in OCDMA LANs," to appear in *Optical Fiber Communication Conference, IEEE/OFC 2004*.
- J. A. Salehi et al., *IEEE Transactions on Communications*, vol. 37, no. 8, pp. 824-833, 1989.
- L. Tancevski et al., *IEEE/OSA Journal of Lightwave Technology*, vol. 14, no. 12, pp. 2636-2647, 1996.

Information theory sets a fundamental limit on the trade-off between the distortion of a compression algorithm and the number of bits at the output of the algorithm (rate). An asymptotically optimal lossy source coding algorithm is an algorithm that approaches this rate-distortion limit as the encoding block length tends to infinity.

For such an algorithm, when coding complexity is taken into account, performance is measured by the redundancy of the algorithm, defined as the difference between its expected performance and the rate-distortion function of the source.
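
For a memoryless Gaussian source under squared error, the rate-distortion limit has a closed form, which makes the redundancy definition concrete (a sketch; the 2-bit coder below is an illustrative example, not one of the coders studied):

```python
import math

def rate_distortion_gaussian(variance, distortion):
    """R(D) = 0.5 * log2(variance / D) bits per sample for a memoryless
    Gaussian source under squared error, valid for 0 < D <= variance."""
    if distortion >= variance:
        return 0.0
    return 0.5 * math.log2(variance / distortion)

# Redundancy of a coder: its rate minus R(D) at the distortion it achieves.
R = rate_distortion_gaussian(1.0, 0.25)
print(R)        # 1.0 bit/sample is the Shannon limit at D = 0.25
print(2.0 - R)  # a 2 bit/sample coder at D = 0.25 has redundancy 1.0 bit
```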

Following the recent redundancy results for block codes (VQ) and trellis codes (TCQ), we investigated the rate-distortion performance of various structured source coding algorithms, such as successive refinement codes (TSVQ), multiple description source codes (MD-VQ and MD-TCQ), codes for correlated Gaussian sources (Transform-VQ), and fixed-rate universal trellis codes (EC-TCQ). Our theoretical analysis of these methods not only supports the results of simulation-based experiments, but also provides some new approaches to improve the efficiency of existing algorithms.

In fact, the redundancy analysis that we developed is a practical generalization of the Shannon source coding theorem to more concrete settings.

Learn More:

- J. Yang and **Z. Zhang**, "Rate-distortion performance of correlated Gaussian source," *Proc. IEEE ISIT 2003*, p. 143.
- J. Yang and **Z. Zhang**, "On the redundancy of universal trellis lossy source coding," *Proc. IEEE ISIT 2003*, p. 168.
- **Z. Zhang**, E.-H. Yang, and V. K. Wei, "The redundancy of source coding with a fidelity criterion--Part I: Known statistics," *IEEE Transactions on Information Theory*, vol. 43, pp. 71-91, Jan. 1997.

Signals in a wireless channel typically suffer from fading--amplitude fluctuations due to multiple copies of the received signal adding constructively and destructively. Without proper compensation, fading can severely degrade the performance of a wireless system. One method for dealing with fading is to introduce diversity: multiple independent paths across which the signal is transmitted. Space-time systems exploit specifically designed signals, transmitted over multiple transmit and receive antennae, to generate and capture different copies of the signal and improve performance.
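
The benefit of independent paths can be seen in a toy Monte Carlo sketch (assuming independent Rayleigh-fading branches with ideal selection of the strongest one; the threshold and trial count are illustrative, and this is not the space-time analysis itself):

```python
import numpy as np

rng = np.random.default_rng(1)

def outage_prob(branches, threshold_db, trials=200_000):
    """Monte Carlo outage probability with selection combining over
    independent Rayleigh-fading branches (unit mean power): an outage
    occurs when even the strongest branch falls below the threshold."""
    thresh = 10 ** (threshold_db / 10)
    # Rayleigh amplitude fading <=> exponentially distributed power
    power = rng.exponential(1.0, size=(trials, branches))
    return (power.max(axis=1) < thresh).mean()

for L in (1, 2, 4):
    print(L, outage_prob(L, -10.0))   # outage falls off rapidly with diversity order
```

With L independent branches the outage probability behaves like p^L, which is the rapid decay of error probability with SNR that diversity buys.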

Our work in space-time systems has considered channel equalization for space-time systems, performance analysis, and space-time modulation and code construction. Our equalization work specifically exploits the structure of a popular space-time modulation scheme: unitary space-time modulation was designed for simple flat fading channels, but with our proposed low-complexity equalizer, such modulations can now be employed in multipath fading channels. We have developed a host of tools for analyzing space-time systems at both low and high signal-to-noise ratios, removing the need for simulation to understand system performance and providing methods by which systems can be designed for low signal-to-noise-ratio environments. We have also designed optimized space-time modulations for multiuser, spread-spectrum systems as well as narrowband systems. Our code constructions are systematic and lend themselves readily to trellis-coded modulation. Thus, we have been able to devise coded modulation for an arbitrary number of transmit antennae and for arbitrary block sizes, rates, and complexities.

Learn More:

- J. Geng, M. Vajapeyam and **U. Mitra**, "Union Bound of Space-Time Block Codes and Decomposable Error Pattern," *ISIT 2003*.
- E. Aktas and **U. Mitra**, "Blind equalization for an application of unitary space-time modulation in ISI channels," *IEEE Transactions on Signal Processing*, vol. 51, no. 11, Nov. 2003, pp. 2931-2942.
- J. C. Guey, M. P. Fitz, M. R. Bell and W-Y. Kuo, "Signal Design for Transmitter Diversity Wireless Communication Systems over Rayleigh Fading Channels," *IEEE Transactions on Communications*, vol. 46, pp. 527-537, April 1999.
- V. Tarokh, N. Seshadri and A. R. Calderbank, "Space-Time Codes for High Data Rate Wireless Communication: Performance Criterion and Code Construction," *IEEE Transactions on Information Theory*, March 1998.

Satellite and wireless networks operate over time-varying channels that depend on attenuation conditions, power allocation decisions, and inter-channel interference. In order to reliably integrate these systems into a high speed data network and meet the increasing demand for high throughput and low delay, it is necessary to develop efficient network layer strategies that fully utilize the physical capabilities of each network element.

We develop the notion of *network layer capacity* and describe capacity achieving power allocation and routing algorithms for general networks with wireless links and adaptive transmission rates. The algorithms do not require knowledge of arrival rates or channel statistics. Fundamental issues of delay, throughput optimality, fairness, implementation complexity, and robustness to time varying channel conditions and changing user demands are discussed. Analysis is performed at the packet level and fully considers the queueing dynamics in systems with arbitrary, potentially bursty, arrival processes.
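
The flavor of such a capacity-achieving policy can be sketched with a max-weight (backpressure) rule over differential backlogs (a simplification for illustration only; the papers below treat power allocation, rate adaptation, and the full queueing dynamics):

```python
def backpressure_decisions(queues, links, rates):
    """One slot of a backpressure decision: on each link (a, b), weight
    every commodity c by its backlog difference Q_a[c] - Q_b[c] and serve
    the commodity with the largest positive weight at the link's rate."""
    decisions = {}
    for (a, b) in links:
        best_c, best_w = None, 0.0
        for c in queues[a]:
            w = queues[a][c] - queues[b].get(c, 0.0)
            if w > best_w:
                best_c, best_w = c, w
        decisions[(a, b)] = (best_c, best_w * rates[(a, b)])
    return decisions

# hypothetical two-node, two-commodity example
queues = {"A": {"c1": 5, "c2": 2}, "B": {"c1": 1, "c2": 4}}
links = [("A", "B")]
rates = {("A", "B"): 1.0}
print(backpressure_decisions(queues, links, rates))   # serve c1: backlog gap 4 beats c2's -2
```

Because decisions depend only on current queue backlogs and link rates, no knowledge of arrival rates or channel statistics is required.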

We further consider ad-hoc mobile networks with a special cell-partitioned structure and a simplified mobility model. Exact expressions for capacity and end-to-end delay are derived. To reduce delay, a transmission protocol which sends redundant packets over multiple paths is developed. For large networks, the protocol reduces delay by orders of magnitude at the cost of decreasing throughput. A fundamental delay/rate tradeoff curve is established, and our protocols are shown to operate on distinct boundary points of this curve.

Learn More:

- **M.J. Neely**, E. Modiano, and C.E. Rohrs, "Dynamic Power Allocation and Routing for Time Varying Wireless Networks," in *IEEE INFOCOM Proceedings*, April 2003.
- **M.J. Neely**, E. Modiano, and C. E. Rohrs, "Power Allocation and Routing in Multi-Beam Satellites with Time Varying Channels," *IEEE Transactions on Networking*, Feb. 2003.
- **M.J. Neely**, Jun Sun, and Eytan Modiano, "Delay and Complexity Tradeoffs for Dynamic Routing and Power Allocation in a Wireless Network," Proceedings of the 40th Annual Allerton Conference on Communication, Control, and Computing, Oct. 2002.

**Achieving the Rate-Diversity Tradeoff of Space-Time Codes**

*Prof. P. Vijay Kumar & Dr. Hsiao-feng Lu (Former Ph.D. Student of Prof. Kumar)*

A key method for combating fading is to incorporate diversity in some form into the communication system. A large amount of diversity ensures that the probability of erroneous reception decays rapidly with increasing signal-to-noise ratio (SNR).

Recently there has been a focus on transmit diversity, i.e., the use of multiple transmit antennas as a means of incorporating diversity. The use of multiple transmit antennas must, however, be accompanied by intelligent signal design that takes advantage of the presence of the multiple antennas.

It turns out that for a fixed signal constellation, there is a tradeoff between the rate at which information can be transmitted and the rapidity (diversity gain) with which the error probability decays with increase in SNR.
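
In rough terms, this tradeoff takes a Singleton-like form (a sketch following the first paper below; here n_t and n_r are the numbers of transmit and receive antennas and R is the rate in symbols per channel use over the fixed alphabet):

```latex
d \;\leq\; n_r \left( n_t - \lceil R \rceil + 1 \right)
```

Full diversity n_t n_r is thus available only at rates up to one symbol per channel use; higher rates necessarily give up some diversity gain.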

In this work, we have recently succeeded in constructing signal sets that are capable of achieving any given point on the rate-diversity gain tradeoff. Our signal design is able to accommodate constellations of practical importance such as 2^K-ary Phase-Shift-Keying (PSK), Pulse-Amplitude Modulation (PAM) and Quadrature Amplitude Modulation (QAM).

Learn More:

- H. Lu and **P. V. Kumar**, "Rate-diversity tradeoff of space-time codes with fixed alphabet and optimal constructions for PSK modulation," *IEEE Trans. Information Theory*, Vol. 49, No. 10, Oct. 2003.
- H. Lu and **P. V. Kumar**, "Algebraic constructions of optimal space-time trellis codes," in Proc. *IEEE GLOBECOM 2003*, Dec. 2003.
- H. Lu, Y. Wang, **P. V. Kumar**, and **K. M. Chugg**, "Remarks on space-time codes including a new lower bound and an improved code," *IEEE Trans. Information Theory*, Vol. 49, No. 10, Oct. 2003.
- V. Tarokh, N. Seshadri, and A. R. Calderbank, "Space-time codes for high data rate wireless communication: performance criterion and code construction," *IEEE Trans. Information Theory*, Vol. 44, No. 2, March 1998.

**Wireless Resource Management**

*Prof. Mehmet Akar, Prof. Urbashi Mitra, Ph.D. Candidate Ayanendu Paul, & Prof. Michael G. Safonov*

Dynamic resource allocation in wireless networks is important to maintain reliable communication links between base stations and mobile users. In order to achieve this objective, the transmitted powers/rates, base station assignments, and allocated channels may need to be updated as the mobile user moves in a cellular environment or when a new call is admitted to the network. Although these resources can be controlled individually, joint allocation improves capacity and battery life, and decreases interference.

In this work, we use control-theoretic methods, including tools from hybrid/switched systems, adaptive control, optimization, and dynamic programming, to develop distributed, fast and robust resource allocation algorithms. These algorithms take into account the effects of channel fading and mobility and achieve a tradeoff between the satisfaction levels of the mobile users and the network operator, thereby providing satisfactory service for the users while reducing burdens on the network such as undesired switching between base stations.

Learn More:

- M. Akar and **U. Mitra**, "Joint downlink power and handoff control using a hybrid systems framework," in Proceedings of the 22nd Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM 2003), vol. 3, pp. 1614-1621, San Francisco, CA, April 2003.
- M. Akar and **U. Mitra**, "Soft handoff algorithms for CDMA cellular networks," IEEE Transactions on Wireless Communications, vol. 2, no. 6, pp. 1259-1274, November 2003.
- M. Akar and **U. Mitra**, "Variations on optimal and suboptimal handoff control for wireless communication networks," IEEE Journal on Selected Areas in Communications: Wireless, vol. 19, no. 6, pp. 1173-1185, June 2001.

Ultra-wideband (UWB) is an emerging technology for short-range, robust wireless indoor communication as well as a methodology for high-performance localization. UWB is characterized by extremely large bandwidth transmission, on the order of several gigahertz. Because of the large bandwidth, a high degree of multipath diversity is possible; with proper receiver designs, this multipath can be captured and will lead to very high fidelity communications. The ultra-wide bandwidth also implies the potential for very fine time resolution. Our research has focused on the design of high-performance receivers and channel estimators for UWB systems. The objective is to balance complexity with performance.

Recent propagation studies suggest that the multiple paths in the UWB channel occur in clusters. Thus, we are considering various channel estimation approaches which exploit the clustered nature of the channel. We find that by considering the structure of the channel we can achieve both a complexity reduction in the estimator and improved performance over non-clustered methods. We are also exploring the channel capacity of the UWB channel by considering the existence of such clusters.

Another angle of attack is to not explicitly estimate the channel, but rather to design modulation and receiver structures where the channel is inferred. Transmitted reference modulation is such a scheme. In transmitted reference schemes, pilot signals are interspersed with data signals. We have designed optimized transmitted reference schemes which can offer several dB gain over traditional transmitted reference schemes. A benefit of transmitted reference schemes is that they do not require knowledge of the structure of the channel and are not sensitive to mismatch in the assumed model of the channel.
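
A toy simulation shows why a transmitted-reference receiver needs no explicit channel estimate (a sketch under an assumed, hypothetical discrete-time channel; not the optimized schemes of this work):

```python
import numpy as np

rng = np.random.default_rng(2)

def tr_receiver_ber(snr_db=10.0, n_bits=2000, delay=16):
    """Toy transmitted-reference demo: each frame carries an unmodulated
    reference pulse and, `delay` samples later, a data pulse with polarity
    +/-1 sent through the same channel. The receiver correlates the two
    frame halves, so the reference itself serves as the matched template."""
    channel = np.array([1.0, -0.6, 0.3])   # assumed multipath-spread pulse shape
    noise_std = 10 ** (-snr_db / 20)
    errors = 0
    for _ in range(n_bits):
        b = 1 if rng.random() < 0.5 else -1
        rx = rng.normal(0.0, noise_std, 2 * delay)
        rx[0:3] += channel                  # received reference pulse
        rx[delay:delay + 3] += b * channel  # received data pulse, same channel
        stat = np.dot(rx[:delay], rx[delay:])   # correlate the two halves
        errors += (1 if stat > 0 else -1) != b
    return errors / n_bits

print(tr_receiver_ber())   # error rate well below chance (0.5)
```

Because the reference traverses the same channel as the data, the correlation automatically collects the multipath energy, at the cost of noise-times-noise terms in the decision statistic.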

Intertwined with channel estimation is the timing recovery problem. Due to the short duration of UWB signals, timing acquisition is one of the most critical issues and poses another challenge to the deployment of UWB radio. In our work we have investigated both timing recovery and channel estimation.

Learn More:

- S. Franz and **U. Mitra**, "On Optimal Data Detection for UWB Transmitted Reference Systems," in *Proc. IEEE Globecom Conference*, San Francisco, December 2003.
- C. Carbonelli, U. Mengali and **U. Mitra**, "Synchronization and channel estimation for UWB signals," in *Proc. IEEE Globecom Conference*, San Francisco, December 2003.
- M. Z. Win and **R. A. Scholtz**, "Impulse Radio: How it works," IEEE Communications Letters, Vol. 2, Feb. 1998, pp. 36-38.
- C. K. Rushforth, "Transmitted-Reference Techniques for Random or Unknown Channels," *IEEE Transactions on Information Theory*, January 1964.

A soft-inverse for a system is the locally optimal processor that updates soft (decision) information on the digital input and output variables. For example, a turbo decoder uses a soft-inverse for each of two constituent decoders and iteratively activates these soft-inverses or soft-in/soft-out (SISO) decoders.

A particularly important type of system is a finite state machine (FSM), which models the constituent codes of a turbo code, as well as many other channels and encoders in digital communication systems. The standard method for implementing the soft-inverse of an FSM is the forward-backward algorithm (FBA). The FBA may be viewed as a generalization of the Viterbi algorithm - i.e., the FBA runs one Viterbi-like recursion on the FSM trellis in each of the forward and backward directions. As such, the FBA inherits the so-called "ACS (add, compare, select) bottleneck" that makes fast implementation of the decoder difficult.

In this work, we found a tree-structured architecture for computing the soft-inverse of an FSM. Compared to the FBA, this provides an exponential speed-up by avoiding the classic ACS-bottleneck. We arrived at this architecture by showing that the soft-inverse problem is equivalent to another simple computational problem: computing partial sums. Thus, the "tree-SISO" developed is essentially the SISO version of a fast, tree-adder.
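
The partial-sum analogy can be made concrete. The sketch below computes prefix sums by combining halves in a balanced tree, giving O(log n) parallel depth instead of a serial O(n) chain; the tree-SISO applies the same restructuring to the forward-backward recursions:

```python
def tree_prefix_sums(x):
    """Prefix sums via balanced-tree recursion: solve each half, then one
    broadcast-add merges them, so the parallel depth is logarithmic rather
    than the linear depth of the usual left-to-right running sum."""
    n = len(x)
    if n == 1:
        return list(x)
    left = tree_prefix_sums(x[:n // 2])
    right = tree_prefix_sums(x[n // 2:])
    # shifting the right half by the left half's total is one parallel step
    return left + [left[-1] + r for r in right]

print(tree_prefix_sums([3, 1, 4, 1, 5]))   # [3, 4, 8, 9, 14]
```

In the tree-SISO, ordinary addition is replaced by the combining operation of the trellis recursions, but the tree structure, and hence the latency advantage, is the same.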

Learn More:

- P. A. Beerel and **K. M. Chugg**, "A Low Latency SISO with Applications to Broadband Turbo Decoding," *IEEE J. Selected Areas in Communications*, vol. 19, May 2001, pp. 860-870.
- P. Thiennviboon and **K. M. Chugg**, "A Low-Latency SISO via Message Passing on a Binary Tree," *Allerton Conf.*, Urbana, IL, Oct. 2000.
- G. Fettweis and H. Meyr, "Parallel Viterbi algorithm implementation: Breaking the ACS-bottleneck," *IEEE Trans. Communications*, vol. 37, pp. 785-790, August 1989.
- **K. M. Chugg**, A. Anastasopoulos, and X. Chen, __Iterative Detection: Adaptivity, Complexity Reduction, and Applications__, Kluwer Academic Publishers, 2000.


Modern VLSI circuits often comprise millions or tens of millions of logic gates. With current test procedures, a chip is rejected if a single error is discovered in this logic. This could be as simple as a particular wire or gate with a "stuck-at-zero" fault -- i.e., the fault results in this point being held at logic level zero. Traditional automated test procedures (ATPs) do not consider the functionality of the device under test; they only search for faults in the low-level underlying logic.

Even though the probability of a given gate having an error is very low, the sheer number of gates makes it increasingly likely that an error will occur somewhere on the chip. Thus, as the field moves towards larger system-on-a-chip solutions, we expect that the yield will decrease substantially, resulting in increased cost.

This work is performed in collaboration with Profs. Gupta, Ortega, and Breuer of EE-Systems and is aimed at error-tolerant circuit design and test to address the decreased-yield problem motivated above. Error tolerance means that even when a low-level fault is found, the overall functionality of the device may be acceptable to the end user. We expect this to be the case for applications such as video processing and communication modems. Our role in this project is to investigate the error tolerance of a modern (iterative) codec. In particular, we have built a systematic repeat-accumulate code using an FPGA prototyping board. This design is based on realistic hardware architectures and allows for differing degrees of parallelism and thus decoding speed -- i.e., from approximately 4 Mbps to 40 Mbps. We are currently using this as a testbed to study which errors most impact the data error rate performance and what automated test methods can be applied to detect unacceptable circuit errors.
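
For illustration, the encoder side of a systematic repeat-accumulate code can be sketched in a few lines (the repetition factor and block length here are illustrative, not the parameters of our FPGA design):

```python
import numpy as np

rng = np.random.default_rng(3)

def ra_encode(info_bits, repeat=3, interleaver=None):
    """Systematic repeat-accumulate encoding: repeat each information bit,
    permute the repeated stream, then accumulate (running XOR). The
    codeword is the information bits followed by the parity stream."""
    rep = np.repeat(info_bits, repeat)
    if interleaver is None:
        interleaver = rng.permutation(len(rep))
    parity = np.bitwise_xor.accumulate(rep[interleaver])
    return np.concatenate([info_bits, parity])

u = np.array([1, 0, 1, 1])
cw = ra_encode(u)
print(len(cw))   # 16: 4 systematic bits + 12 parity bits
```

The simplicity of the repeat, permute, and accumulate stages is what makes this family attractive for hardware prototyping; the decoder is where the iterative message passing, and hence most of the error-tolerance question, lives.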

Learn More:
- See this USC Viterbi School of Engineering article
- H. Jin, A. Khandekar, and R. McEliece, "Irregular Repeat-Accumulate Codes," *2nd International Symposium on Turbo Codes and Related Topics*, Brest, France, 2000.
- J. Dielissen and J. Huisken, "State vector reduction for initialization of sliding windows MAP," *2nd International Symposium on Turbo Codes and Related Topics*, Brest, France, 2000, pp. 387-390.
- A. Abbasfar and K. Yao, "An Efficient and Practical Architecture for High Speed Turbo Decoders," *IEEE 58th Vehicular Technology Conference*, Oct. 2003, pp. 337-341.

Ultra-wideband (UWB) radio uses signals with fractional bandwidth (3dB bandwidth divided by center frequency) greater than 25%.
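The fractional-bandwidth definition above is a one-line computation. The band edges in the example are illustrative (the 3.1-10.6 GHz FCC UWB band is used purely as a familiar case).

```python
def fractional_bandwidth(f_low_hz: float, f_high_hz: float) -> float:
    """Fractional bandwidth: (3 dB bandwidth) / (center frequency),
    taking the center as the midpoint of the band edges."""
    f_center = 0.5 * (f_low_hz + f_high_hz)
    return (f_high_hz - f_low_hz) / f_center

# e.g. a signal occupying 3.1-10.6 GHz
fb = fractional_bandwidth(3.1e9, 10.6e9)
print(f"fractional bandwidth = {fb:.1%}")  # far above the 25% UWB threshold
```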

Unlike narrowband signals, the received signal in a UWB system often bears little resemblance to the signal driving the transmitter's antenna. Waves reflecting off or penetrating through objects in the channel can undergo significant filtering, and the antennas at both the transmit and receive ends cause pulse-shaping that can vary with direction of transmission and reception. The result is that the received pulse shape associated with a given path depends on that path.

This work provides a needed algorithm, called the Sensor-CLEAN algorithm, for taking into account these special bandwidth-dependent effects, so that quantitative comparisons of the UWB channel can be made with more narrowband results, and the performance of UWB communication systems predicted.

The algorithm was applied to measured indoor propagation data to develop models for the time- and angle-of-arrival of UWB signals. These models, combined with the Sensor-CLEAN method for processing measured data, also enable future statistical descriptions of propagation environments in other building architectures and geometries.
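The core serial-cancellation idea behind CLEAN-style processing can be sketched in one dimension. This is the basic CLEAN loop only, not the full Sensor-CLEAN algorithm of Cramer et al. (which handles sensor arrays and direction-dependent pulse shapes); the single known template and the stopping rule (a fixed iteration count) are simplifying assumptions.

```python
import numpy as np

def clean_1d(received, template, n_iter=10, loop_gain=1.0):
    """Basic 1-D CLEAN sketch: repeatedly find the delay where the
    template best matches the residual, record that path's (delay,
    amplitude), and subtract the scaled template from the residual."""
    residual = np.asarray(received, dtype=float).copy()
    t = np.asarray(template, dtype=float)
    t_energy = float(np.dot(t, t))
    paths = []
    for _ in range(n_iter):
        corr = np.correlate(residual, t, mode="valid")   # matched filter
        delay = int(np.argmax(np.abs(corr)))
        amp = loop_gain * corr[delay] / t_energy         # LS amplitude
        residual[delay:delay + len(t)] -= amp * t        # cancel the path
        paths.append((delay, amp))
    return paths, residual
```

Each recorded (delay, amplitude) pair is one resolved multipath component; the per-path pulse-shape variation described above is exactly what Sensor-CLEAN adds on top of this basic loop.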

Learn More:
- J.M. Cramer, **R. A. Scholtz** and M.Z. Win, "Evaluation of an Ultra-Wideband Propagation Channel," *IEEE Transactions on Antennas and Propagation*, Vol. 50, May 2002, pp. 561-570. *(This paper received the 2003 S. A. Schelkunoff Transactions **Prize Paper Award** of the IEEE Antennas and Propagation Society.)*
- M. Z. Win and **R. A. Scholtz**, "Impulse Radio: How it works," *IEEE Communications Letters*, Vol. 2, Feb. 1998, pp. 36-38.
- Q. Spencer, M. Rice, B. Jeffs and M. Jensen, "A Statistical Model for the Angle-of-Arrival in Indoor Multipath Propagation," *IEEE Vehicular Technology Conference*, Vol. 7, May 1997, pp. 1415-1419.

Code-division multiple-access (CDMA) allows multiple users to share the transmission channel simultaneously in both the time and frequency domains. Proper demodulation of multiuser CDMA signals requires the efficient mitigation of interference caused by the presence of these other users. Furthermore, in practical wireless channels, spread-spectrum signals experience multipath distortion, which also must be considered in receiver design. Thus a challenge for such systems is to consider joint multiple-access interference and multipath compensation. If the channel is time-varying, this can be even more challenging.

One approach we have considered is to develop improved estimators of the channel and interference statistics by exploiting the structure of CDMA signals. Such a methodology can offer strong gains in performance over more classical adaptive least-mean squares or recursive least squares receivers for multiuser systems.

We have also designed a class of adaptive receivers which do not require any training information and work well in fast fading environments. The schemes are based on linear interference suppression schemes which optimize a variety of cost functions (minimum mean-squared error, minimum output energy, maximum likelihood and best linear unbiased estimators). These methods project the received signal onto a smaller-dimensional signal space, thus reducing the dimensionality of the receiver as well. This dimension reduction has two benefits: fewer parameters have to be determined, reducing complexity, and the receiver is more agile to changes in the channel.
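The dimension-reduction idea can be illustrated with a reduced-rank MMSE (Wiener) solution. This is a generic sketch, not the specific receivers of the papers below: it assumes the projection basis is taken from the principal eigenvectors of the covariance (one common choice; the multistage Wiener filter of Goldstein, Reed and Scharf builds a different basis), and the matrix sizes are illustrative.

```python
import numpy as np

def reduced_rank_mmse(R, p, rank):
    """Reduced-rank linear MMSE sketch.
    R    : (N, N) covariance of the received vector
    p    : (N,)  cross-correlation of received vector and desired symbol
    rank : dimension of the subspace the receiver works in
    Projects onto the span of the 'rank' principal eigenvectors of R and
    solves the Wiener equation there, so only 'rank' free parameters
    (rather than N) must be estimated/adapted."""
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1][:rank]
    S = eigvecs[:, order]            # (N, rank) projection basis
    R_r = S.T @ R @ S                # reduced covariance
    p_r = S.T @ p                    # reduced cross-correlation
    w_r = np.linalg.solve(R_r, p_r)  # Wiener solution in the subspace
    return S @ w_r                   # lift back to full dimension
```

With `rank = N` this reproduces the full MMSE solution `R^{-1} p`; with `rank < N` fewer parameters must track a time-varying channel, which is the agility benefit noted above.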

An interesting feature of our work is that all of the receivers share a common structure, which has implications for implementation, especially for the non-coherent schemes we have developed, which require a combination of receivers.

Learn More:

- W. Chen and **U. Mitra**, "An improved blind adaptive MMSE receiver for fast fading DS-CDMA channels," *IEEE Journal on Selected Areas in Communications*, Vol. 19, No. 9, Aug. 2001, pp. 1531-1543.
- W. Chen, **U. Mitra** and P. Schniter, "On the equivalence of three reduced rank linear estimators with applications to DS-CDMA," *IEEE Transactions on Information Theory*, Vol. 48, No. 9, Sep. 2002, pp. 2609-2614.
- J. S. Goldstein, **I. S. Reed** and L. L. Scharf, "A Multistage Representation of the Wiener Filter Based on Orthogonal Projections," *IEEE Trans. on Information Theory*, Vol. 44, No. 7, pp. 2943-2959, Nov. 1998.



We distinguish between two modeling problems: model extraction and model construction. The construction problem has been largely addressed by the coding theory community: given a set of design parameters, there exist well-known methods for designing codes with graphical models that imply low-complexity, near-optimal decoding algorithms. The extraction problem, however, is largely unsolved. Given an arbitrary linear block code, there exist no known methods for extracting good graphical models. For specific codes, there do exist good graphical models. One such example is the tail-biting trellis-based model of the Golay code. Current known good graphical models of specific codes depend on particular algebraic properties of those codes and do not extend to arbitrary codes. We are developing heuristics for the systematic extraction of good graphical models of arbitrary linear block codes, and are also studying the fundamental tradeoff between the algebraic structure of a code and the complexity of allowable graphical models with a given topology.
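One concrete measure of graphical-model quality is the number of short cycles in the code's Tanner graph, since short cycles degrade iterative decoding. The sketch below counts length-4 cycles by brute force over row pairs of a parity-check matrix; it illustrates the quantity of interest, not the efficient counting algorithm of Halford and Chugg cited below, and the example matrix is a made-up toy, not a real code's.

```python
from itertools import combinations
from math import comb

def count_4_cycles(H):
    """Count length-4 cycles in the Tanner (bipartite) graph of a binary
    parity-check matrix H: each pair of check rows sharing k >= 2
    variable columns contributes C(k, 2) four-cycles."""
    supports = [{j for j, b in enumerate(row) if b} for row in H]
    return sum(comb(len(s1 & s2), 2)
               for s1, s2 in combinations(supports, 2))

# Hypothetical toy parity-check matrix (3 checks, 4 variables):
H = [[1, 1, 0, 1],
     [1, 1, 1, 0],
     [0, 1, 1, 1]]
print(count_4_cycles(H))
```

Heuristics for model extraction can use a count like this as part of the cost when comparing candidate graphical realizations of the same code.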

Learn More:
- T. R. Halford and **K. M. Chugg**, "An Algorithm for Counting Cycles in Bipartite Graphs," *IEEE Trans. Information Theory*, Jan. 2006.
- G. D. Forney Jr., "Codes on Graphs: Normal Realizations," *IEEE Trans. Information Theory*, Feb. 2001.