IEEE Transactions on Signal Processing
TOC Alert for Publication #78
Diffusion-based communication refers to the transfer of information using molecules as message carriers whose propagation is governed by the laws of molecular diffusion. It has been identified as one of the most promising solutions for end-to-end communication between nanoscale devices. In this paper, the design of a diffusion-based communication system considering stochastic signaling, arbitrary orders of channel memory, and noisy reception is proposed. Diffusion in one, two, and three dimensions is considered. Three signal processing techniques for the molecular concentration with low computational complexity are proposed. For the detector design, both a low-complexity one-shot optimal detector for mutual information maximization and a near maximum likelihood (ML) sequence detector are proposed. To the best of our knowledge, our paper is the first to give an analytical treatment of the signal processing, estimation, and detection problems for diffusion-based communication in the presence of ISI and reception noise. Numerical results indicate that the proposed signal processing technique followed by the one-shot detector achieves near-optimal throughput without the need for a priori information in both short-range and long-range diffusion-based communication scenarios, which suggests that an ML sequence detector is not necessary. Furthermore, the proposed receiver design guarantees that diffusion-based communication operates without failure even in the case of infinite channel memory. A channel capacity of 1 bit per channel use can ultimately be achieved by extending the duration of the signaling interval.
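For intuition on the underlying channel model, the expected molecular concentration after an impulsive release in unbounded 3-D space is the Green's function of the diffusion equation. A minimal sketch (the symbols `Q`, `D`, `r`, `t` are illustrative and not the paper's notation):

```python
import math

def concentration_3d(Q, D, r, t):
    """Expected molecule concentration at distance r and time t after an
    impulsive release of Q molecules at the origin in 3-D free diffusion:
    c(r, t) = Q / (4*pi*D*t)^{3/2} * exp(-r^2 / (4*D*t))."""
    if t <= 0:
        return 0.0
    return Q / (4.0 * math.pi * D * t) ** 1.5 * math.exp(-r * r / (4.0 * D * t))
```

The concentration at a fixed distance rises to a peak (at t = r^2 / 6D) and then decays, which is what produces the channel memory (ISI) the abstract refers to.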
In this paper, we focus on separable convex optimization problems with box constraints and a specific set of linear constraints. The solution is given in closed form as a function of some Lagrange multipliers that can be computed through an iterative procedure in a finite number of steps. Graphical interpretations are given, casting valuable insights into the proposed algorithm and allowing one to retain some of the intuition spelled out by the water-filling policy. The algorithm turns out to be not only general enough to compute the solution to different instances of the problem at hand, but also remarkably simple in the way it operates. We also show how some power allocation problems in signal processing and communications can be solved with the proposed algorithm.
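As a concrete instance of the water-filling intuition the abstract invokes, here is a minimal sketch of the classical single-user water-filling allocation, solved by bisection on the Lagrange multiplier (the water level); the function name and problem setup are illustrative, not the paper's general algorithm:

```python
def water_filling(gains, total_power):
    """Maximize sum_i log(1 + g_i * p_i) subject to sum_i p_i = P, p_i >= 0.
    The optimal allocation is p_i = max(0, mu - 1/g_i) for a water level mu
    chosen so that the power budget is met; mu is found by bisection."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    for _ in range(100):
        mu = (lo + hi) / 2.0
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    return [max(0.0, mu - 1.0 / g) for g in gains]
```

Weak channels whose inverse gain exceeds the water level get zero power, which is the behavior the graphical water-filling picture captures.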
Regularized $M$-Estimators of Scatter Matrix
In this paper, a general class of regularized $M$-estimators of scatter matrix is proposed that is suitable also for low or insufficient sample support (small $n$ and large $p$) problems. The considered class constitutes a natural generalization of $M$-estimators of scatter matrix (Maronna, 1976) and is defined as a solution to a penalized $M$-estimation cost function. Using the concept of geodesic convexity, we prove the existence and uniqueness of the regularized $M$-estimators of scatter and the existence and uniqueness of the solution to the corresponding $M$-estimating equations under general conditions. Unlike the non-regularized $M$-estimators of scatter, the regularized estimators are shown to exist for any data configuration. An iterative algorithm with proven convergence to the solution of the regularized $M$-estimating equation is also given. Since the conditions for uniqueness do not include the regularized versions of Tyler's $M$-estimator, necessary and sufficient conditions for their uniqueness are established separately. For the regularized Tyler's $M$-estimators, we also derive a simple, closed-form, and data-dependent solution for choosing the regularization parameter based on shape matrix matching in the mean-squared sense. Finally, some simulation studies illustrate the improved accuracy of the proposed regularized $M$-estimators of scatter compared to their non-regularized counterparts in low sample support problems. An example of radar detection using the normalized matched filter (NMF) illustrates that an adaptive NMF detector based on regularized $M$-estimators is able to accurately maintain the preset CFAR level.
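One commonly used member of this family is a diagonally loaded Tyler fixed point; the sketch below iterates that form. The specific penalty weighting and trace normalization here are assumptions for illustration, not necessarily the paper's exact estimator:

```python
import numpy as np

def regularized_tyler(X, beta, n_iter=200):
    """Diagonally loaded Tyler fixed-point iteration for a scatter matrix.
    X is n x p (rows = observations); beta in (0, 1) is the shrinkage
    weight toward the identity.  The trace is pinned to p at each step to
    remove the scale ambiguity inherent in scatter-only estimation."""
    n, p = X.shape
    S = np.eye(p)
    for _ in range(n_iter):
        Sinv = np.linalg.inv(S)
        w = np.einsum('ij,jk,ik->i', X, Sinv, X)    # x_i^T S^{-1} x_i
        S_new = (1 - beta) * (p / n) * (X.T / w) @ X + beta * np.eye(p)
        S_new *= p / np.trace(S_new)                # fix the scale
        S = S_new
    return S
```

Because of the `beta * I` loading term, the iterate stays positive definite for any data configuration, which mirrors the existence result stated in the abstract.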
Presents the table of contents for this issue of this publication.
We show that the generalized total least squares (GTLS) problem with a singular noise covariance matrix is equivalent to the restricted total least squares (RTLS) problem and propose a recursive method for its numerical solution. The method is based on the generalized inverse iteration. The estimation error covariance matrix and the estimated augmented correction are also characterized and computed recursively. The algorithm is cheap to compute and is suitable for online implementation. Simulation results in least squares (LS), data least squares (DLS), total least squares (TLS), and restricted total least squares (RTLS) noise scenarios show fast convergence of the parameter estimates to their optimal values obtained by corresponding batch algorithms.
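For reference, the batch TLS solution that a recursive scheme of this kind tracks can be written directly from the SVD of the augmented matrix [A b]; this is the standard textbook construction, not the paper's generalized inverse iteration:

```python
import numpy as np

def tls(A, b):
    """Batch total least squares: find x minimizing ||[dA db]||_F subject
    to (A + dA) x = b + db.  The solution comes from the right singular
    vector of [A b] associated with the smallest singular value."""
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                     # smallest-singular-value direction
    return -v[:-1] / v[-1]
```

The recursive method in the abstract avoids recomputing this SVD from scratch as new rows arrive, which is what makes it cheap enough for online use.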
Statistics of the MLE and Approximate Upper and Lower Bounds—Part I: Application to TOA Estimation
In nonlinear deterministic parameter estimation, the maximum likelihood estimator (MLE) is unable to attain the Cramér–Rao lower bound at low and medium signal-to-noise ratios (SNRs) due to the threshold and ambiguity phenomena. In order to evaluate the achieved mean-squared error (MSE) at those SNR levels, we propose new MSE approximations (MSEA) and an approximate upper bound by using the method of interval estimation (MIE). The mean and the distribution of the MLE are approximated as well. The MIE consists of splitting the a priori domain of the unknown parameter into intervals and computing the statistics of the estimator in each interval. Also, we derive an approximate lower bound (ALB) based on the Taylor series expansion of noise and an ALB family by employing the binary detection principle. The accuracy of the proposed MSEAs and the tightness of the derived approximate bounds are validated by considering the example of time-of-arrival estimation.
Statistics of the MLE and Approximate Upper and Lower Bounds—Part II: Threshold Computation and Optimal Pulse Design for TOA Estimation
Threshold and ambiguity phenomena are studied in Part I of this paper, where approximations for the mean-squared error (MSE) of the maximum-likelihood estimator are proposed using the method of interval estimation (MIE), and where approximate upper and lower bounds are derived. In this part, we consider time-of-arrival estimation and we employ the MIE to derive closed-form expressions of the begin-ambiguity, end-ambiguity, and asymptotic signal-to-noise ratio (SNR) thresholds with respect to some features of the transmitted signal. Both baseband and passband pulses are considered. We prove that the begin-ambiguity threshold depends only on the shape of the envelope of the ACR, whereas the end-ambiguity and asymptotic thresholds depend only on the shape of the ACR itself. We exploit the results on the begin-ambiguity and asymptotic thresholds to optimize, with respect to the available SNR, the pulse that achieves the minimum attainable MSE. The results of this paper are valid for various estimation problems.
Outage Constrained Robust Transmit Optimization for Multiuser MISO Downlinks: Tractable Approximations by Conic Optimization
In this paper, we study a probabilistically robust transmit optimization problem under imperfect channel state information (CSI) at the transmitter and under the multiuser multiple-input single-output (MISO) downlink scenario. The main issue is to keep the probability of each user's achievable rate outage as caused by CSI uncertainties below a given threshold. As is well known, such rate outage constraints present a significant analytical and computational challenge. Indeed, they do not admit simple closed-form expressions and are unlikely to be efficiently computable in general. Assuming Gaussian CSI uncertainties, we first review a traditional robust optimization-based method for approximating the rate outage constraints, and then develop two novel approximation methods using probabilistic techniques. Interestingly, these three methods can be viewed as implementing different tractable analytic upper bounds on the tail probability of a complex Gaussian quadratic form, and they provide convex restrictions, or safe tractable approximations, of the original rate outage constraints. In particular, a feasible solution from any one of these methods will automatically satisfy the rate outage constraints, and all three methods involve convex conic programs that can be solved efficiently using off-the-shelf solvers. We then proceed to study the performance-complexity tradeoffs of these methods through computational complexity and comparative approximation performance analyses. Finally, simulation results are provided to benchmark the three convex restriction methods against the state of the art in the literature. The results show that all three methods offer significantly improved solution quality and much lower complexity.
Closed-Form Delay-Optimal Power Control for Energy Harvesting Wireless System With Finite Energy Storage
In this paper, we consider delay-optimal power control for an energy harvesting wireless system with finite energy storage. The wireless system is powered solely by a renewable energy source with bursty data arrivals, and is characterized by a data queue and an energy queue. We consider a delay-optimal power control problem and formulate an infinite horizon average cost Markov decision process (MDP). To deal with the curse of dimensionality, we introduce a virtual continuous time system and derive closed-form approximate priority functions for the discrete time MDP at various operating regimes. Based on the approximation, we obtain an online power control solution which is adaptive to the channel state information as well as the data and energy queue state information. The derived power control solution has a multi-level water-filling structure, where the water level is determined jointly by the data and energy queue lengths. In the simulations, we show that the proposed scheme has significant performance gain compared with various baselines.
Thanks to the insertion of a cyclic prefix (CP), orthogonal-frequency-division multiplexing (OFDM) systems with offset quadrature amplitude modulation (OQAM-OFDM) and CP, denoted CP-OQAM-OFDM systems, are more robust to multipath fading channels and admit simpler and better receivers than conventional OQAM-OFDM systems. In this paper, two channel estimators, i.e., weighted least squares (WLS) and pairs of pilots (POP), are presented for CP-OQAM-OFDM systems. To evaluate the proposed WLS and POP estimators, the corresponding Cramér–Rao bounds are given for comparison. In addition, the computational complexities of the proposed estimators are analyzed as well. Simulation results demonstrate that both of the proposed channel estimators can achieve good bit error rate (BER) performance.
Multiuser interference, i.e., crosstalk, is the main bottleneck for digital subscriber lines (DSL) technology. Dynamic spectrum management (DSM) mitigates crosstalk by focusing on the multiuser power/frequency resource allocation problem, and it can provide formidable gains in performance. In this paper, we look at the DSM problem from a different perspective. We formulate the problem with the power allocation vectors defined with spherical coordinates, i.e., as a function of a radius and angles. We see that this reformulation permits us to exploit structure in the problem. We propose two algorithms. In the first of them, we use the fact that the DSM problem is concave in the radial dimension and perform an exhaustive search for the angles. The second algorithm uses a block coordinate descent approach, i.e., a sequence of line searches. We show that there is structure to be found in the radial dimension (it is always concave) and in the angle dimensions. For the latter, we provide conditions for the line searches to be concave or convex for each of the angles. The fact that we use structure leads to large savings in computational complexity. For example, we see that our first algorithm can be up to 60 times faster than a corresponding previously proposed algorithm. Our second algorithm is 2–15 times faster than a relevant previously proposed algorithm.
In this paper, we consider distributed estimation of an unknown random scalar by using wireless sensors and a fusion center (FC). We adopt a linear model for distributed estimation of a scalar source where both the observation models and sensor operations are linear, and the multiple access channel (MAC) is coherent. We consider a fusion center with either multiple antennas or a single antenna. In order to estimate the source, best linear unbiased estimation (BLUE) is adopted. Two cases are considered: minimization of the mean square error (MSE) of the BLUE estimator subject to a network power constraint, and minimization of the network power subject to a quality of service (QoS) constraint. For a fusion center with multiple antennas, iterative solutions are provided and it is shown that the proposed algorithms always converge. For a fusion center with a single antenna, closed-form solutions are provided, and it is shown that the iterative solutions reduce to the closed-form solutions. Furthermore, the effect of noise correlation at the sensors and fusion center is investigated. It is shown that knowledge of the noise correlation at the sensors helps to improve the system performance. Moreover, if correlation exists and is not factored in, the system performance might still improve depending on the correlation structure. We also show, by simulations, that when the noise at the fusion center is correlated, the system performance degrades even when the correlation structure is known. Finally, simulations are provided to verify the analysis and present the performance of the proposed schemes.
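The BLUE for a scalar source observed through linear sensors has a short closed form; a minimal sketch (the symbols `h` and `C` are generic notation for the observation vector and noise covariance, not necessarily the paper's):

```python
import numpy as np

def blue(x, h, C):
    """Best linear unbiased estimate of a scalar theta from x = h*theta + n,
    where n is zero-mean noise with covariance C:
        theta_hat = (h^T C^{-1} x) / (h^T C^{-1} h)."""
    Cinv_h = np.linalg.solve(C, h)     # C^{-1} h without forming C^{-1}
    return float(Cinv_h @ x) / float(Cinv_h @ h)
```

Observations with large noise variance receive small weight, which is why knowledge of the noise correlation structure affects performance in the way the abstract describes.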
In this paper, we propose a distributed algorithm for the estimation and control of the connectivity of ad-hoc networks in the presence of a random topology. First, given a generic random graph, we introduce a novel stochastic power iteration method that allows each node to estimate and track the algebraic connectivity of the underlying expected graph. Using results from stochastic approximation theory, we prove that the proposed method converges almost surely (a.s.) to the desired value of connectivity even in the presence of imperfect communication scenarios. The estimation strategy is then used as a basic tool to adapt the power transmitted by each node of a wireless network, in order to maximize the network connectivity in the presence of realistic medium access control (MAC) protocols or simply to drive the connectivity toward a desired target value. Numerical results corroborate our theoretical findings, thus illustrating the main features of the algorithm and its robustness to fluctuations of the network graph due to the presence of random link failures.
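The quantity being estimated is the Fiedler value (second-smallest Laplacian eigenvalue). A deterministic, centralized sketch of the power-iteration idea — iterate on c*I − L restricted to the subspace orthogonal to the all-ones vector — is below; the paper's contribution is the stochastic, distributed version of this, which the sketch does not capture:

```python
import numpy as np

def algebraic_connectivity(A, n_iter=2000):
    """Fiedler value of the graph with adjacency matrix A, via power
    iteration on M = c*I - L in the subspace orthogonal to the all-ones
    vector (the eigenvector of L for eigenvalue 0)."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A
    c = 2.0 * A.sum(axis=1).max()       # upper bound on eigenvalues of L
    x = np.arange(n, dtype=float)
    x -= x.mean()                        # project out the all-ones direction
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        x = c * x - L @ x                # apply M = c*I - L
        x -= x.mean()                    # re-project (numerical safety)
        x /= np.linalg.norm(x)
    return float(x @ (L @ x))            # Rayleigh quotient = lambda_2
```

In the distributed setting, each matrix-vector product L @ x requires only local exchanges between neighbors, which is what makes a decentralized implementation possible.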
A practical orthogonal frequency-division multiplexing (OFDM) system can generally be modelled by the Hammerstein system that includes the nonlinear distortion effects of the high power amplifier (HPA) at the transmitter. In this contribution, we advocate a novel nonlinear equalization scheme for OFDM Hammerstein systems. We model the nonlinear HPA, which represents the static nonlinearity of the OFDM Hammerstein channel, by a B-spline neural network, and we develop a highly effective alternating least squares algorithm for estimating the parameters of the OFDM Hammerstein channel, including the channel impulse response coefficients and the parameters of the B-spline model. Moreover, we also use another B-spline neural network to model the inversion of the HPA's nonlinearity, and the parameters of this inverting B-spline model can easily be estimated using the standard least squares algorithm based on the pseudo training data obtained as a byproduct of the Hammerstein channel identification. Equalization of the OFDM Hammerstein channel can then be accomplished by the usual one-tap linear equalization together with the inverse B-spline neural network model obtained. The effectiveness of our nonlinear equalization scheme for OFDM Hammerstein channels is demonstrated by simulation results.
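The alternating least squares idea can be sketched with a polynomial nonlinearity standing in for the B-spline network (a deliberate simplification; the function name, orders, and real-valued setup are illustrative assumptions):

```python
import numpy as np

def hammerstein_als(x, y, poly_order=3, fir_len=3, n_iter=20):
    """Alternating LS identification of a Hammerstein system y = h * f(x),
    with f a polynomial (simplified stand-in for a B-spline model) and h
    an FIR channel.  Each half-step is a linear LS problem."""
    # polynomial basis matrix: columns x, x^2, ..., x^P
    B = np.column_stack([x ** k for k in range(1, poly_order + 1)])
    a = np.zeros(poly_order)
    a[0] = 1.0                                   # start with f = identity
    for _ in range(n_iter):
        s = B @ a                                # current nonlinearity output
        # LS fit of the FIR taps h given s: y[t] ~ sum_k h[k] s[t-k]
        S = np.column_stack([np.concatenate([np.zeros(k), s[:len(s) - k]])
                             for k in range(fir_len)])
        h, *_ = np.linalg.lstsq(S, y, rcond=None)
        # LS fit of the polynomial coefficients a given h
        HB = np.column_stack([np.convolve(h, B[:, j])[:len(y)]
                              for j in range(poly_order)])
        a, *_ = np.linalg.lstsq(HB, y, rcond=None)
    return h, a
```

Note the inherent scale ambiguity (scaling h up and a down leaves the output unchanged), so a fit is best judged by its output residual rather than by the raw parameter values.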
Recently, in the context of covariance matrix estimation, in order to improve as well as to regularize the performance of Tyler's estimator, also called the Fixed-Point Estimator (FPE), a “shrinkage” fixed-point estimator has been introduced. First, this work extends those results by giving the general solution of the “shrinkage” fixed-point algorithm. Secondly, by analyzing this solution, called the generalized robust shrinkage estimator, we prove that it converges to a unique solution when the shrinkage parameter $\beta$ (loading factor) tends to 0. This solution is exactly the FPE with the trace of its inverse equal to the dimension of the problem. This general result allows one to give another interpretation of the FPE and, more generally, of the maximum likelihood approach for covariance matrix estimation when constraints are added. Then, some simulations illustrate our theoretical results as well as the way to choose an optimal shrinkage factor. Finally, this work is applied to a Space-Time Adaptive Processing (STAP) detection problem on real STAP data.
Deflation-based FastICA is a popular method for independent component analysis. In the standard deflation-based approach, the row vectors of the unmixing matrix are extracted one after another, always using the same nonlinearities. In practice, the user has to choose the nonlinearities, and the efficiency and robustness of the estimation procedure then strongly depend on this choice as well as on the order in which the components are extracted. In this paper we propose a novel adaptive two-stage deflation-based FastICA algorithm that (i) allows one to use different nonlinearities for different components and (ii) optimizes the order in which the components are extracted. Based on a consistent preliminary unmixing matrix estimate and our theoretical results, the algorithm selects in an optimal way the order and the nonlinearities for each component from a finite set of candidates specified by the user. It is also shown that, for each component, the best possible nonlinearity is obtained by using the log-density function. The resulting ICA estimate is affine equivariant with a known asymptotic distribution. The excellent performance of the new procedure is shown with asymptotic efficiency and finite-sample simulation studies.
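The standard baseline the paper improves on — deflation-based FastICA with one fixed nonlinearity (here tanh) — can be sketched as follows. The sketch assumes pre-whitened data and fixed iteration counts; function names and defaults are illustrative:

```python
import numpy as np

def fastica_deflation(X, n_iter=200, seed=0):
    """Deflation-based FastICA with the tanh nonlinearity.  X is whitened
    data (rows = samples).  Rows of the unmixing matrix W are extracted
    one by one; each new row is orthogonalized against the earlier ones
    (the deflation step)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    W = np.zeros((p, p))
    for i in range(p):
        w = rng.standard_normal(p)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            wx = X @ w
            g, gp = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
            w = (X.T @ g) / n - gp.mean() * w     # fixed-point update
            w -= W[:i].T @ (W[:i] @ w)            # deflate against found rows
            w /= np.linalg.norm(w)
        W[i] = w
    return W
```

The paper's point is precisely that hard-coding one nonlinearity and a fixed extraction order, as above, is suboptimal, and that both can be selected adaptively per component.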
In this paper, we consider multiuser multihop relay communication systems, where the users, relays, and the destination node may have multiple antennas. We address the issue of source and relay precoding matrices design to maximize the system mutual information (MI). By exploiting the link between the maximal MI and the weighted minimal mean-squared error (WMMSE) objective functions, we show that the intractable maximal MI-based source and relay optimization problem can be solved via the WMMSE-based source and relay design through an iterative approach which is guaranteed to converge to at least a stationary point. For the WMMSE problem, we derive the optimal structure of the relay precoding matrices and show that the WMMSE matrix at the destination node can be decomposed into the sum of WMMSE matrices at all hops. Under a (moderately) high signal-to-noise ratio (SNR) condition, this WMMSE matrix decomposition significantly simplifies the solution to the WMMSE problem. Numerical simulations are performed to demonstrate the effectiveness of the proposed algorithm.
To assess the risk of extreme events such as hurricanes, earthquakes, and floods, it is crucial to develop accurate extreme-value statistical models. Extreme events often display heterogeneity (i.e., nonstationarity), varying continuously with a number of covariates. Previous studies have suggested that models considering covariate effects lead to reliable estimates of extreme events distributions. In this paper, we develop a novel statistical model to incorporate the effects of multiple covariates. Specifically, we analyze as an example the extreme sea states in the Gulf of Mexico, where the distribution of extreme wave heights changes systematically with location and storm direction. In the proposed model, the block maximum at each location and sector of wind direction are assumed to follow the Generalized Extreme Value (GEV) distribution. The GEV parameters are coupled across the spatio-directional domain through a graphical model, in particular, a three-dimensional (3D) thin-membrane model. Efficient learning and inference algorithms are developed based on the special characteristics of the thin-membrane model. We further show how to extend the model to incorporate an arbitrary number of covariates in a straightforward manner. Numerical results for both synthetic and real data indicate that the proposed model can accurately describe marginal behaviors of extreme events.
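The block-maximum idea can be illustrated with the Gumbel distribution, the zero-shape member of the GEV family, fitted by the method of moments. This is a textbook shortcut for light-tailed data, not the paper's graphical-model learning algorithm:

```python
import math
import numpy as np

def gumbel_fit(block_maxima):
    """Method-of-moments fit of the Gumbel distribution (GEV with shape 0):
    the scale comes from the sample standard deviation (std = scale*pi/sqrt(6))
    and the location from the sample mean (mean = loc + gamma*scale)."""
    scale = np.std(block_maxima) * math.sqrt(6.0) / math.pi
    loc = np.mean(block_maxima) - 0.5772156649 * scale   # Euler-Mascheroni
    return float(loc), float(scale)
```

In the paper's setting, a full three-parameter GEV is fitted per location/direction cell, with the parameters smoothed across cells by the thin-membrane model rather than estimated independently as above.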
The aim of this paper is to propose diffusion strategies for distributed estimation over adaptive networks, assuming the presence of spatially correlated measurements distributed according to a Gaussian Markov random field (GMRF) model. The proposed methods incorporate prior information about the statistical dependency among observations, while at the same time processing data in real time and in a fully decentralized manner. A detailed mean-square analysis is carried out in order to prove stability and evaluate the steady-state performance of the proposed strategies. Finally, we also illustrate how the proposed techniques can be easily extended in order to incorporate thresholding operators for sparsity recovery applications. Numerical results show the potential advantages of using such techniques for distributed learning in adaptive networks deployed over GMRF.
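The adapt-then-combine (ATC) flavor of diffusion LMS, without the GMRF-based weighting or the thresholding operators the paper adds, can be sketched as follows (the uniform combination matrix and the data layout are assumptions for illustration):

```python
import numpy as np

def diffusion_lms(U, d, C, mu=0.01, n_iter=500):
    """Adapt-then-combine diffusion LMS.  U is (nodes, time, taps) regressor
    data, d is (nodes, time) desired responses, and C is a row-stochastic
    combination matrix encoding the network neighborhoods.  Each node runs
    a local LMS step, then averages estimates with its neighbors."""
    N, T, M = U.shape
    W = np.zeros((N, M))
    for t in range(n_iter):
        # adapt: local LMS step at every node
        for k in range(N):
            u = U[k, t % T]
            e = d[k, t % T] - u @ W[k]
            W[k] = W[k] + mu * e * u
        # combine: neighborhood averaging via C
        W = C @ W
    return W
```

With spatially correlated observations, the paper replaces this plain averaging with combinations informed by the GMRF structure, which is where its performance gains come from.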