Evaluating different methods of microarray data normalization
BMC Bioinformatics volume 7, Article number: 469 (2006)
Abstract
Background
With the development of DNA hybridization microarray technologies, it is now possible to simultaneously assess the expression levels of thousands to tens of thousands of genes. Quantitative comparison of microarrays uncovers distinct patterns of gene expression, which define different cellular phenotypes or cellular responses to drugs. Due to technical biases, normalization of the intensity levels is a prerequisite for performing further statistical analyses. Therefore, choosing a suitable approach for normalization is critical and deserves judicious consideration.
Results
Here, we considered three commonly used normalization approaches, namely Loess, Splines and Wavelets, and two non-parametric regression methods which have yet to be used for normalization, namely Kernel smoothing and Support Vector Regression. The results obtained were compared using artificial microarray data and benchmark studies. The results indicate that Support Vector Regression is the most robust to outliers and that Kernel smoothing is the worst normalization technique, while no practical differences were observed between Loess, Splines and Wavelets.
Conclusion
In light of our results, Support Vector Regression is favored for microarray normalization due to its robustness in estimating the normalization curve, in which it is superior to the other methods compared.
Background
DNA microarray technology is a powerful approach for genomic research, playing an increasingly important role in biomedical research. This technology yields simultaneous measurement of the expression levels of thousands of genes, allowing the analysis of differential gene expression patterns under different conditions, such as disease (pathological) states or treatment with different chemotherapeutic drugs. Because of small differences in RNA quantities and fluctuations generated by the technique, the intensity levels may vary from one replicate to another due to effects unrelated to the genes, requiring data normalization before the replicates can be compared.
Therefore, normalization is an important step in microarray data analysis. The purpose of data normalization is to minimize the effects caused by technical variations and, as a result, make the data comparable, so that actual biological changes can be found. Several normalization approaches have been proposed, most of which derive from studies using two-color spotted microarrays. Some authors proposed normalization of the hybridization intensity ratios; others use global, linear methods, while still others use local, non-linear methods. Several authors suggested using spike-in controls, housekeeping genes, or invariant genes [1–7].
Recently, some authors suggested the use of non-linear normalization methods [8–10], which are believed to be superior to the above mentioned approaches. The locally weighted regression (Lowess) procedure [11] has been widely used for this purpose and implemented in several microarray analysis software packages [12, 13], and similar methods have been suggested, such as Splines [14, 15] and Wavelets [16].
Here, we compare three different well-known microarray data normalization methods, namely: Loess Regression (LR), Splines Smoothing (SS) and Wavelets Smoothing (WS). In addition, we propose two different normalization approaches, called Kernel Regression (KR) [17, 18] and Support Vector Regression (SVR) [19], which, to the best of our knowledge, have yet to be applied for microarray normalization. In order to assess the most appropriate normalization technique, benchmark studies were carried out using data derived from CodeLink™ mouse microarray experiments [20], generated at our Cell and Molecular Biology Laboratory (Chemistry Institute, University of São Paulo).
Results
We sought to assess the performance of five different methods of microarray normalization, namely Loess, Splines, Wavelets, Kernel and Support Vector Regression, on simulated microarrays and on an actual CodeLink™ microarray platform comprising ten thousand mouse genes. Although we have focused on simulated two-color cDNA microarray data, our discussion also applies to single-color oligonucleotide microarrays.
The artificial microarrays, composed of ten thousand spots, were generated using the model proposed by Balagurunathan et al. (2002) [21]. The parameter sets used were (0, 100^(1/0.7), -0.7, 1) and (0, 100^(1/0.9), -0.9, 1) for the sinusoid shape, (0, 500, -1, 1) and (0, 10, -1, 1) for the banana shape, and (0, 10, -1, 1) and (0, 100^(1/0.7), -0.7, 1) for the mixed shape. Gene expression was generated by an exponential distribution with parameter λ = 1/3000 and the outliers were generated by a Beta distribution with parameters B(1.7, 4.8). For more details, see Balagurunathan et al. (2002).
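To make the simulation setup concrete, the following R sketch generates two log-intensity channels x and y with exponentially distributed expression and Beta-distributed outliers. It is a simplification of the Balagurunathan model (the shape-distortion functions are omitted), and the noise level and the fold-change rescaling of the Beta draws are our own illustrative choices:

```r
# Simplified sketch of the simulated data generation (cf. [21]).
set.seed(42)
n.spots <- 10000
expr <- rexp(n.spots, rate = 1/3000)          # gene expression levels

# Log intensities of the two channels with multiplicative noise; a full
# simulation would also apply the sinusoid/banana shape distortions of [21].
x <- log2(expr * exp(rnorm(n.spots, sd = 0.1)))
y <- log2(expr * exp(rnorm(n.spots, sd = 0.1)))

# Outliers: a fraction of spots receives a strong differential-expression
# shift; Beta(1.7, 4.8) draws are rescaled to fold-changes (rescaling is ours).
n.out <- round(0.05 * n.spots)
idx   <- sample(n.spots, n.out)
y[idx] <- y[idx] + log2(1 + 9 * rbeta(n.out, 1.7, 4.8))
```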
The smoothing parameters used in each dataset are described in Table 1. For SVR, we tested a range of values and selected ε = 0.01 and C = 4 as the most adequate ones. It is important to highlight that these parameters are arbitrary; therefore, for each method we chose the optimum parameters, i.e., the ones which resulted in the lowest mean square error. Figure 1 shows the mean square errors for each normalization method applied to three different simulated microarrays with no outliers.
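The selection of ε and C can be illustrated by a small grid search scored by the mean square error against the known generating curve. The sketch below assumes the simulated x and y from above, a vector g.true holding the true curve evaluated at each x (available only in simulations), and uses the R package e1071 as one possible SVR implementation, not necessarily the one used in the study:

```r
library(e1071)   # provides svm() with type = "eps-regression"

# Grid search over (epsilon, C), scored by MSE against the generating curve.
grid <- expand.grid(eps = c(0.001, 0.01, 0.1), C = c(1, 2, 4, 8))
mse  <- apply(grid, 1, function(p) {
  fit <- svm(y ~ x, type = "eps-regression", epsilon = p["eps"], cost = p["C"])
  mean((fitted(fit) - g.true)^2)
})
grid[which.min(mse), ]    # parameter pair with the lowest mean square error
```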
In order to compare the perturbation caused by the presence of outliers and the robustness of each normalization method, we randomly inserted 5, 10, 15, 20 and 40% of outliers (genes which display very high differential expression) at three different expression levels (low, medium, high), and calculated the respective mean square errors between the regression curve and the actual curve (the function from which the microarray was generated). This step was repeated 100 times to estimate the average sum of squared errors and its variance. The Wilcoxon and the Kolmogorov-Smirnov tests were performed in order to determine whether the five regression methods differ from one another in any significant manner.
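One replicate of this robustness experiment could be sketched in R as follows; fit.method() is a hypothetical placeholder for any of the five fitters, and the size of the outlier shift is illustrative:

```r
# One replicate: insert a fraction of outliers in a chosen expression
# stratum of x, re-fit, and score the MSE against the true curve.
one.replicate <- function(x, y, g.true, frac = 0.10,
                          stratum = c("low", "medium", "high")) {
  stratum <- match.arg(stratum)
  qs  <- quantile(x, c(1/3, 2/3))
  idx <- switch(stratum,
                low    = which(x <  qs[1]),
                medium = which(x >= qs[1] & x < qs[2]),
                high   = which(x >= qs[2]))
  out <- sample(idx, round(frac * length(idx)))
  y[out] <- y[out] + rnorm(length(out), mean = 4)  # strong differential expression
  curve <- fit.method(x, y)           # placeholder for any of the five methods
  mean((curve - g.true)^2)            # MSE between fitted and true curve
}

# Repeating 100 times per method yields MSE samples, e.g. mse.svr and
# mse.loess, compared with:
# wilcox.test(mse.svr, mse.loess); ks.test(mse.svr, mse.loess)
```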
A high performance normalization technique should yield corrections which are unbiased and display the smallest standard deviation.
Comparison of the results presented in Table 2, 3 and 4 shows no important difference between LR, SS and WS. Although the non-parametric KR method has been successfully applied in econometrics data analysis [22], it displayed a poor performance for microarray normalization, probably because it is highly sensitive to outliers [23].
Upon analyzing Table 2, it is possible to observe, in the case of the sinusoid shape, that when outliers are inserted in regions of low gene expression, SVR, WS, SS, LR and KR display, in this order, the lowest to the highest mean square error, being statistically different (p value < 0.001) from one another. For the banana and mixed shapes, LR and SS presented a lower MSE than WS. In Table 3, it is interesting to note that when outliers are inserted in regions of medium gene expression, i.e., high density of genes, the order of performance remains the same as in Table 2 and SVR displays a mean square error which is significantly different from the others (p value < 0.001). LR and SS showed no significant difference (p value > 0.05) and KR is significantly worse than the other methods (p value < 0.001). In Table 4, the outliers are inserted in a region of high gene expression. Once more, the trend is maintained, namely, KR is the most affected by outliers (p value < 0.001), and no difference between SS and WS (p value > 0.05) was observed for the sinusoid shape. For the other two shapes, LR and SS were better than WS (p value < 0.001).
In all three cases (outliers at low, medium and high gene expression), SVR is the least affected by outliers (p value < 0.001), independently of the microarray's shape. In addition, SVR yields the smallest standard deviation, followed by LR, SS and WS, with KR displaying the largest deviation. The five methods were also applied to actual microarray data with artificially inserted outliers, and the results were the same as those obtained from the artificial microarray experiments.
Figure 2 illustrates the performance of the five normalization methods applied to actual microarray data, without the insertion of artificial outliers. Small differences could be observed in the normalization curves in the regions of low and high expression, due to the low number of genes and the high variance in these regions.
Discussion
By analyzing the extent to which outliers could disturb the regression curve, we observed that KR is more sensitive to outliers than LR, SS and WS in all three cases (outliers at low, medium and high expression). In all three cases, SVR is shown to be the least affected.
The superior performance of Splines when compared to KR may be explained by the degree of smoothing, which varies according to the density of points, unlike KR, which has a fixed window size. Wavelets also have a slightly better performance than KR, probably due to their multi-resolution properties. In general, SS and WS presented similar performance when we compared the medians of the mean square errors using the Wilcoxon test. However, when we used the Kolmogorov-Smirnov test, they presented a statistically significant difference (p value < 0.001). SS and WS constitute somewhat better normalization techniques than LR for the sinusoid shape but, for the other two shapes, LR is better than SS and WS. For practical purposes, the differences between them in terms of disturbance by outliers are too small to be of any concern.
The SVR method is shown to be very robust to outliers presented at different gene expression levels, making it the best normalization technique among those tested for identifying actual differentially expressed genes.
One well-known problem in identifying differentially expressed genes is normalizing genes displaying low expression levels, due to the low quantity of the corresponding transcripts and the high spot intensity variance. An equivalent problem occurs with genes presenting very high expression levels due to the low frequency of these genes. Once more, under these conditions, the SVR method is shown to be better than other currently used methods.
We performed the same tests for five other pairs of CodeLink™ microarrays and the results obtained were the same: SVR is the most robust to outliers and KR is the worst method, being highly sensitive to differentially expressed genes and yielding poor regression curves.
Other methods which are also robust to outliers, based on a new regression approach called the two-way semi-linear model [24–27], have also been applied to microarray data normalization. This new approach, developed in the last few years, deserves further studies, which we are planning to undertake in the future.
Conclusion
We have proposed a new approach to normalize microarray data and tested this SVR method in benchmark studies and in several simulations. The results obtained with SVR were superior to those obtained with some widely used normalization techniques such as LR, SS and WS. SVR is shown to be more robust to outliers, even at very low and very high gene expression levels, and is therefore useful for identifying differentially expressed genes. Even when tested on different microarray shapes, SVR was superior to the other methods, while LR, SS and WS presented similar performances. Therefore, we have demonstrated that SVR is feasible and very promising for microarray data normalization.
Methods
Simulation
The program which generates the artificial microarray and the analyses were implemented in R, a language for statistical computing [28]. This script may be downloaded at: [29].
CodeLink™ microarray
Cell lysis and RNA extraction
Cell cultures were lysed with guanidine isothiocyanate and RNA was purified by centrifugation of the cell lysates on a cesium chloride cushion (Chirgwin et al., 1979). The absorbance ratio at 260/280 nm was used to assess RNA purity, a ratio of 1.8–2.0 indicating adequate purity.
Labeling and purification of targets
RNA samples were prepared and processed according to protocols supplied by the manufacturer (Amersham Biosciences). Briefly, cDNAs were synthesized from purified RNA (2 μg) and control bacterial mRNAs. Samples were purified using the QIAquick Spin kit (Qiagen) and concentrated by SpeedVac. Concentrated pellets were used in a biotinylated-UTP based cRNA synthesis using the CodeLink™ Expression Assay Reagent Kit (Amersham). Labeled cRNAs were purified using the RNeasy kit (Qiagen) and fragmented with the supplied solution at 94°C for 20 min.
Hybridization and washing of arrays
Fragmented biotin-labeled cRNAs (10 μg) were incubated with CodeLink™ bioarrays and shaken (300 rpm) for 20 h. The bioarrays were then washed and incubated with Cy5-Streptavidin (30 min). Scanning of the bioarrays was performed in a GenePix 4000 B Array Scanner (Axon Instruments) and the data were collected using the CodeLink™ System Software (Amersham), which provided the raw data and invalidated data from irregular spots.
Loess regression
Suppose we have n measurements; for each, let y_i be the response and x_i the predictor, where x is the log intensity of one microarray and y is the log intensity of the other when we are analyzing single-color microarrays. If the microarray is a two-color platform, x is the log intensity of one dye and y is the log intensity of the other dye.
In this model, x and y are assumed to be related by
y_i = g(x_i) + ε_i (1)
where g is the regression function and ε_i is a random error. The idea of local regression is that, near x = x_0, the regression function g(x) can be locally approximated by the value of a function in some specified parametric class. Such a local approximation is obtained by fitting a regression surface to the data points within a chosen neighborhood of the point x_0.
In this method, weighted least squares are used to fit linear or quadratic functions of the predictors at the centers of the neighborhoods. The radius of each neighborhood is chosen so that it contains a specified percentage of the data points. The fraction of the data in each local neighborhood, called the smoothing parameter, is weighted by a smooth decreasing function of the distance from the center of the neighborhood [30].
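As a minimal sketch, this local regression normalization can be carried out in R with loess(); the span shown is R's default smoothing parameter, not necessarily the value in Table 1:

```r
# Loess normalization sketch: fit the bias curve g and subtract its
# deviation from the identity line.
fit    <- loess(y ~ x, span = 0.75, degree = 2)   # local quadratic fit
g.hat  <- fitted(fit)
y.norm <- y - (g.hat - x)   # normalized log intensities of the second array
```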
B-Splines smoothing
Due to their simple structure and good approximation properties, polynomials are widely used in practice for approximating functions [31, 32]. Let x and y be as defined above, and let
... ≤ y_{-1} ≤ y_0 ≤ y_1 ≤ y_2 ≤ ... (4)
be a sequence of real numbers (the knots). Given integers i and m > 0, we define
B_i^m(x) = (y_{i+m} - y_i) [y_i, ..., y_{i+m}] (y - x)_+^{m-1} (5)
for all real x, where [y_i, ..., y_{i+m}] denotes the divided difference, over the knots, of the function y ↦ (y - x)_+^{m-1}. We call B_i^m the m-th order B-Spline associated with the knots y_i, ..., y_{i+m}.
For m = 1, the B-Spline associated with y_i < y_{i+1} is particularly simple: it is the piecewise constant function
B_i^1(x) = 1 if y_i ≤ x < y_{i+1}, and 0 otherwise (6)
In our analysis, we applied cubic Splines, i.e., Splines of degree 3.
Explicit formulas can also be given for B_i^m in the case where either y_i or y_{i+m} is a knot of multiplicity m.
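In R, a cubic smoothing Spline of this kind is available through smooth.spline(); the sketch below assumes the same x and y log intensities as before:

```r
# Cubic smoothing Spline sketch: smooth.spline() fits Splines of degree 3,
# with the smoothing level chosen by (generalized) cross-validation.
fit    <- smooth.spline(x, y)
g.hat  <- predict(fit, x)$y     # estimated normalization curve at each x
y.norm <- y - (g.hat - x)
```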
Wavelet smoothing
The Wavelet transform is a relatively new approach which has some similarities with the Fourier transform. Wavelets differ from Fourier methods in that they allow the localization of a signal in both time and frequency. In Wavelet theory, a function is represented by an infinite series expansion in terms of dilated and translated versions of a basic function ψ, called the "mother" Wavelet. A Wavelet transformation leads to an additive decomposition of a signal into a series of components describing smooth and rough features of the signal.
The term Wavelet means small wave; Wavelets are oscillations that decay rapidly. As with the B-Splines function system, the Wavelet functions ψ(t) can be used to generate a function basis for certain spaces [33]. An orthonormal basis can be generated by dyadic dilations and translations of a mother Wavelet ψ(t):
ψ_{j,k}(t) = 2^{j/2} ψ(2^j t - k), j, k ∈ Z (7)
Wavelets are functions which satisfy the following properties:
i) ∫ ψ(t) dt = 0 (8)
ii) ∫ ψ²(t) dt < ∞ (9)
iii) C_ψ = ∫ (|Ψ(ω)|² / |ω|) dω < ∞, where the function Ψ(ω) is the Fourier transform of ψ(t) (10)
iv) ∫ t^j ψ(t) dt = 0, j = 0, 1, ..., r - 1, for r ≥ 1, and ∫ |t^r ψ(t)| dt < ∞ (11)
An important result is that any function f(t) with ∫ f²(t) dt < ∞ can be expanded as
f(t) = Σ_{j,k} c_{j,k} ψ_{j,k}(t) (12)
In other words, any function f(t) can be represented by a linear combination of the functions ψ_{j,k}(t). The smoothing procedure can be carried out by an approximation, choosing a maximum resolution J, with j = 1, 2, ..., J and k = 1, 2, ..., 2^{j-1}. Here, we considered the Mexican hat Wavelet [34], defined by
ψ(t) = (2 / (√3 π^{1/4})) (1 - t²) e^{-t²/2} (13)
rather than other functions, such as Morlet or Shannon, since they do not have an analytic formula.
The c_{j,k} coefficients are estimated via an ordinary least squares regression. An important feature of the Wavelet representation is that it allows the description of functions belonging to both Sobolev and Besov spaces [35].
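A minimal sketch of this procedure in R, assuming the same x and y as before and an illustrative maximum resolution J = 5:

```r
# Wavelet smoothing sketch, following the text: Mexican hat functions
# psi_{j,k} evaluated on x rescaled to [0, 1], coefficients fitted by OLS.
mexican.hat <- function(t) (2 / (sqrt(3) * pi^0.25)) * (1 - t^2) * exp(-t^2 / 2)

u <- (x - min(x)) / diff(range(x))          # rescale predictor to [0, 1]
J <- 5                                      # illustrative maximum resolution
basis <- do.call(cbind, lapply(1:J, function(j)
  sapply(1:2^(j - 1), function(k) 2^(j / 2) * mexican.hat(2^j * u - k))))

fit    <- lm(y ~ basis)                     # ordinary least squares for c_jk
y.norm <- y - (fitted(fit) - x)
```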
Kernel regression
KR belongs to the family of smoothing methods and is part of the non-parametric regression methods. KR bases the prediction of a value on past observations, weighting the impact of each past observation according to how similar it is to the current values of the explanatory variables.
KR is one of the most widely used procedures in non-parametric curve estimation. Nadaraya (1964) and Watson (1964) proposed an estimator for the curve g, given by
g_h(x) = Σ_i K((x - x_i)/h) y_i / Σ_i K((x - x_i)/h) (14)
where K is the Kernel function and h is the bandwidth.
In our datasets, we used the Gaussian Kernel because it is symmetric and centered at the mean.
In addition to being easy to compute, the Nadaraya-Watson estimator g_h(x) is consistent. When h → 0, the estimated curve presents large variability; when h → ∞, we obtain an overly smooth curve [36]. The bandwidth h controls the degree of smoothness of the estimated curve. It is easy to observe that this KR estimator is just a weighted sum of the observed responses y_i; the denominator ensures that the weights sum to 1.
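Base R provides this estimator with a Gaussian Kernel through ksmooth(); the bandwidth below is illustrative and governs the bias-variance trade-off just described:

```r
# Nadaraya-Watson Kernel regression sketch (Gaussian kernel, fixed bandwidth).
fit    <- ksmooth(x, y, kernel = "normal", bandwidth = 0.5, x.points = x)
g.hat  <- fit$y[order(order(x))]   # map back from sorted to original order
y.norm <- y - (g.hat - x)
```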
Support Vector Regression
The SVR algorithm is a non-linear generalization of the Generalized Portrait algorithm developed in Russia by Vapnik and Lerner (1963) [37] and Vapnik and Chervonenkis (1964) [38]. It is based upon statistical learning theory, developed by Vapnik and Chervonenkis (1974) [39]. In Bioinformatics and, more specifically, in microarray data analysis, to the best of our knowledge, this algorithm has previously been used only once, by Hisanori et al. (2004), to extract relations between promoter sequences and their strengths [40]. Here, we propose the use of SVR to normalize microarray data.
Let {(x_1, y_1), ..., (x_ℓ, y_ℓ)} ⊂ R × R be the gene expression data derived from the microarray experiments, where x is the log intensity of one microarray and y is the log intensity of the other when we are analyzing single-color microarrays. When the microarray is a two-color platform, x is the log intensity of one dye and y is the log intensity of the other dye. In ε-SVR [41], the goal is to obtain a function f(x) that has at most ε deviation from the y_i for all the data, and is as flat as possible.
In the case of linear functions f:
f(x) = (w^T x) + b, with w ∈ R^n, b ∈ R (15)
Flatness in (15) means a small w. One way to ensure this is to minimize the norm ||w||², which leads to the convex optimization problem
minimize (1/2)||w||²
subject to y_i - (w^T x_i) - b ≤ ε and (w^T x_i) + b - y_i ≤ ε (16)
The tacit assumption in (16) is that a function f exists which approximates all pairs (x_i, y_i) with ε precision. However, there are cases where it is necessary to allow for some errors. To solve this problem, one can introduce slack variables ξ_i, ξ_i* to cope with otherwise unfeasible constraints of the optimization problem (16), arriving at the formulation stated in [41]:
minimize (1/2)||w||² + C Σ_i (ξ_i + ξ_i*)
subject to y_i - (w^T x_i) - b ≤ ε + ξ_i, (w^T x_i) + b - y_i ≤ ε + ξ_i*, and ξ_i, ξ_i* ≥ 0 (17)
where the constant C > 0 determines the trade-off between the flatness of f and the amount up to which deviations larger than ε are tolerated. This corresponds to dealing with the so-called ε-insensitive loss function |ξ|_ε:
|ξ|_ε = 0 if |ξ| ≤ ε, and |ξ|_ε = |ξ| - ε otherwise (18)
It is necessary to construct a Lagrange function from the primal objective function and the corresponding constraints by introducing a dual set of variables. According to Mangasarian (1969) [42], McCormick (1983) [43] and Vanderbei (1997) [44], it follows that
L := (1/2)||w||² + C Σ_i (ξ_i + ξ_i*) - Σ_i (η_i ξ_i + η_i* ξ_i*) - Σ_i α_i (ε + ξ_i - y_i + (w^T x_i) + b) - Σ_i α_i* (ε + ξ_i* + y_i - (w^T x_i) - b) (19)
where L is the Lagrangian and η_i, η_i*, α_i, α_i* are Lagrange multipliers. Hence, the dual variables in (19) have to satisfy positivity constraints:
α_i, α_i*, η_i, η_i* ≥ 0 (20)
Note that we refer to α_i and α_i* jointly as α_i^(*) (and analogously for η_i^(*)).
From the saddle point condition, the partial derivatives of L with respect to the primal variables (w, b, ξ_i, ξ_i*) have to vanish for optimality:
∂L/∂b = Σ_i (α_i* - α_i) = 0 (21)
∂L/∂w = w - Σ_i (α_i - α_i*) x_i = 0 (22)
∂L/∂ξ_i^(*) = C - α_i^(*) - η_i^(*) = 0 (23)
Substituting (21), (22) and (23) into (19), we obtain the dual optimization problem:
maximize -(1/2) Σ_{i,j} (α_i - α_i*)(α_j - α_j*)(x_i^T x_j) - ε Σ_i (α_i + α_i*) + Σ_i y_i (α_i - α_i*)
subject to Σ_i (α_i - α_i*) = 0 and α_i, α_i* ∈ [0, C] (24)
Equation (22) can be rewritten as
w = Σ_i (α_i - α_i*) x_i, thus f(x) = Σ_i (α_i - α_i*)(x_i^T x) + b (25)
This is the Support Vector expansion, i.e., the description of w as a linear combination of the training patterns x_i.
To compute b, it is necessary to use the Karush-Kuhn-Tucker (KKT) conditions [45, 46], which state that, at the solution, the product between dual variables and constraints has to vanish:
α_i (ε + ξ_i - y_i + (w^T x_i) + b) = 0
α_i* (ε + ξ_i* + y_i - (w^T x_i) - b) = 0 (26)
and
(C - α_i) ξ_i = 0
(C - α_i*) ξ_i* = 0 (27)
From (26) and (27) it follows that:
(i) only samples (x_i, y_i) with corresponding α_i^(*) = C lie outside the ε-insensitive tube;
(ii) α_i α_i* = 0, i.e., there can never be a pair of dual variables α_i, α_i* which are simultaneously nonzero.
From (i) and (ii), it is possible to conclude that
ε - y_i + (w^T x_i) + b ≥ 0 and ξ_i = 0 if α_i < C (28)
ε - y_i + (w^T x_i) + b ≤ 0 if α_i > 0 (29)
In conjunction with an analogous analysis on α_i*, we have
max{-ε + y_i - (w^T x_i) | α_i < C or α_i* > 0} ≤ b ≤ min{ε + y_i - (w^T x_i) | α_i > 0 or α_i* < C} (30)
If some α_i^(*) ∈ (0, C), the inequalities become equalities.
Finally, note the sparsity of the SV expansion: from (26), the Lagrange multipliers may be nonzero only for samples with |f(x_i) - y_i| ≥ ε. Therefore, we have a sparse expansion of w in terms of the x_i [47].
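In practice, the ε-SVR fit described above can be obtained with the svm() function of the R package e1071 (an interface to LIBSVM; one possible implementation, with the radial kernel being our illustrative choice), using the ε = 0.01 and C = 4 selected in the Results:

```r
library(e1071)   # svm() is an interface to LIBSVM

# epsilon-SVR normalization sketch: deviations smaller than epsilon are not
# penalized, which makes the fitted curve robust to strongly expressed outliers.
fit    <- svm(y ~ x, type = "eps-regression",
              epsilon = 0.01, cost = 4, kernel = "radial")
g.hat  <- predict(fit, data.frame(x = x))
y.norm <- y - (g.hat - x)    # outliers keep large residuals after normalization
```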
Abbreviations
- LR: Loess Regression
- SS: Splines Smoothing
- WS: Wavelets Smoothing
- KR: Kernel Regression
- SVR: Support Vector Regression
References
Quackenbush J: Microarray data normalization and transformation. Nat Genet 2002, 32: 496–501. 10.1038/ng1032
Culhane AC, Perriere G, Considine EC, Cotter TG, Higgins DG: Between-group analysis of microarray data. Bioinformatics 2002, 18: 1600–1608. 10.1093/bioinformatics/18.12.1600
Durbin BP, Hardin JS, Hawkins DM, Rocke DM: A variance-stabilizing transformation for gene-expression microarray data. Bioinformatics 2002, 18: S105–110.
Kepler TB, Crosby L, Morgan KT: Normalization and analysis of DNA microarray data by self-consistency and local regression. Genome Biol 2002, 3: RESEARCH0037. 10.1186/gb-2002-3-7-research0037
Yang IV, Chen E, Hasseman JP, Liang W, Frank BC, Wang S, Sharov V, Saeed AI, White J, Li J, Lee NH, Yeatman TJ, Quackenbush J: Within the fold: assessing differential expression measures and reproducibility in microarray assays. Genome Biol 2002, 3: research0062.
Schadt EE, Li C, Ellis B, Wong WH: Feature extraction and normalization algorithms for high-density oligonucleotide gene expression array data. J Cell Biochem 2001, 37: 120–125. 10.1002/jcb.10073
Hill AA, Brown EL, Whitley MZ, Tucker-Kellogg G, Hunter CP, Slonim DK: Evaluation of normalization procedures for oligonucleotide array data based on spiked cRNA controls. Genome Biol 2001, 2: RESEARCH0055. 10.1186/gb-2001-2-12-research0055
Yang YH, Speed T: Design issues for cDNA microarray experiments. Nat Rev Genet 2002, 3: 579–588.
Perou CM: Show me the data! Nat Genet 2001, 29: 373. 10.1038/ng1201-373
Brazma A, Hingamp P, Quackenbush J, Sherlock G, Spellman P, Stoeckert C, Aach J, Ansorge W, Ball CA, Causton HC, Gaasterland T, Glenisson P, Holstege FC, Kim IF, Markowitz V, Matese JC, Parkinson H, Robinson A, Sarkans U, Schulze-Kremer S, Stewart J, Taylor R, Vilo J, Vingron M: Minimum information about microarray experiment (MIAME)-toward standards for microarray data. Nat Genet 2001, 29: 365–371. 10.1038/ng1201-365
Yang YH, Dudoit S, Luu P, Lin DM, Peng V, Ngai J, Speed TP: Normalization for cDNA microarray data: a robust composite method addressing single and multiple slide systematic variation. Nucleic Acids Res 2002, 30: e15. 10.1093/nar/30.4.e15
Beheshti B, Braude I, Marrano P, Thorner P, Zielenska M, Squire JA: Chromosomal localization of DNA amplifications in neuroblastoma tumors using cDNA microarray comparative genomic hybridization. Neoplasia 2003, 5: 53–62.
Saeed AI, Sharov V, White J, Li J, Liang W, Bhagabati N, Braisted J, Klapa M, Currier T, Thiagarajan M, Sturn A, Snuffin M, Rezantsev A, Popov D, Ryltsov A, Kostukovich E, Borisovsky I, Liu Z, Vinsavich A, Trush V, Quackenbush J: TM4: a free, open-source system for microarray data management and analysis. Biotechniques 2003, 34: 374–378.
Baird D, Johnstone P, Wilson T: Normalization of microarray data using a spatial mixed model analysis which includes splines. Bioinformatics 2004, 20(17): 3196–3205. 10.1093/bioinformatics/bth384
Workman C, Jensen LJ, Jarmer H, Berka R, Gautier L, Nielsen HB, Saxild HH, Nielsen C, Brunak S, Knudsen S: A new non-linear normalization method for reducing variability in DNA microarray experiments. Genome Biology 2002, 3(9):research0048.1–0048.16. 10.1186/gb-2002-3-9-research0048
Wang J, Ma JZ, Li MD: Normalization of cDNA microarray data using wavelet regressions. Combinatorial Chemistry & High Throughput Screening 2006, 9: 783–791.
Nadaraya EA: On estimating regression. Theory of Probability and its Applications 1964, 10: 186–190. 10.1137/1110024
Watson GS: Smooth regression analysis. Sankhyā A 1964, 26: 359–372.
Vapnik VN: The Nature of Statistical Learning Theory. New York: Springer; 1995.
Ramakrishnan R, Dorris D, Lublinsky A, Nguyen A, Domanus M, Prokhorova A, Gieser L, Touma E, Lockner R, Tata M, Zhu X, Patterson M, Shippy R, Sendera TJ, Mazumder A: An assessment of Motorola CodeLink™ microarray performance for gene expression profiling applications. Nucleic Acids Research 2002, 30.
Balagurunathan Y, Dougherty ER, Chen Y, Bittner ML, Trent JM: Simulation of cDNA microarrays via a parameterized random signal model. Journal of Biomedical Optics 2002, 7(3):507–523. 10.1117/1.1486246
Dias R: A review of non-parametric curve estimation methods with application to Econometrics. Economia 2002, 2: 31–75.
Archambeau C: Probabilistic models in noisy environment – and their application to a visual prosthesis for the blind. PhD thesis. Universite catholique de Louvain, Applied Sciences Faculty; 2005.
Fan J, Tam P, Vande Woude G, Ren Y: Normalization and analysis of cDNA microarrays using within-array replications applied to neuroblastoma cell response to a cytokine. PNAS 2004, 101: 1135–1140. 10.1073/pnas.0307557100
Fan J, Peng H, Huang T: Semilinear high-dimensional model for normalization of microarray data: a theoretical analysis and partial consistency. J Am Stat Assoc 2005, 100(471):781–813. 10.1198/016214504000001781
Huang J, Wang D, Zhang C: A two-way semi-linear model for normalization and analysis of cDNA microarray data. J Am Stat Assoc 2005, 100(471):814–829. 10.1198/016214504000002032
Wang D, Huang J, Xie H, Manzella L, Soares MB: A robust two-way semi-linear model for normalization of cDNA microarray data. BMC Bioinformatics 2005, 6: 14.
The R project for statistical computing[http://www.r-project.org]
Evaluating different methods of microarray data normalization[http://mariwork.iq.usp.br/normalization/]
Cleveland WS, Grosse E, Shyu WM: Local regression models. In Statistical Models in S. Edited by: Chambers JM, Hastie TJ. Pacific Grove: Wadsworth & Brooks/Cole; 1992. Chapter 8.
Schumaker LL: Spline functions basic theory. New York: John Wiley & Sons; 1981.
Prenter PM: Splines and variational methods. New York: John Wiley & Sons; 1975.
Meyer Y: Wavelets Algorithms and Applications. Philadelphia: SIAM; 1993.
Chui CK: An introduction to wavelets. San Diego: Academic Press; 1992.
Härdle W: Smoothing techniques with implementation. New York: Springer-Verlag; 1990.
Donoho DL, Johnstone IM: Minimax estimation via wavelet shrinkage. Annals of Statistics 1998, 26: 879–921. 10.1214/aos/1024691081
Vapnik V, Lerner A: Pattern recognition using generalized portrait method. Automatic and Remote Control 1963, 24: 774–780.
Vapnik V, Chervonenkis A: A note on one class of perceptrons. Automatics and Remote Control 1964, 25.
Vapnik V, Chervonenkis A: Theory of pattern recognition. Moskow: Nauka; 1974.
Hisanori K, Oshima T, Asai K: Extracting relations between promoter sequences and their strengths from microarray data. Bioinformatics 2004, 21: 1062–1068. 10.1093/bioinformatics/bti094
Vapnik VN: Statistical Learning Theory. New York: Wiley; 1998.
Mangasarian OL: Nonlinear Programming. New York: McGraw-Hill; 1969.
McCormick GP: Nonlinear Programming Theory Algorithms and Applications. New York: John Wiley and Sons; 1983.
Vanderbei RJ: An interior point code for quadratic programming. In Statistics and Operations Research. Princeton Univ., NJ; 1997.
Karush W: Minima of functions of several variables with inequalities as side constraints. In Master thesis. University of Chicago Department of Mathematics; 1939.
Kuhn HW, Tucker AW: Nonlinear programming. In Proceedings of the 2nd Berkeley Symposium on Mathematical Statistics and Probability. Berkeley: University of California Press; 1951: 481–492.
Smola AJ, Schölkopf B: A tutorial on support vector regression. Statistics and Computing 2004, 14: 199–222. 10.1023/B:STCO.0000035301.49549.88
Acknowledgements
This research was supported by FAPESP, CAPES, CNPq, FINEP and PRP-USP.
Author information
Authors and Affiliations
Corresponding author
Additional information
Authors' contributions
AF – has made substantial contributions to conception and design of the study, analysis and interpretation of data and has been involved in drafting of the manuscript.
JRS – has made substantial contributions to conception and design of the study, analysis and interpretation of data and has been involved in drafting of the manuscript.
LOR – acquisition of the benchmark data and has been involved in drafting parts of the manuscript.
CEF – has discussed the results and critically revised the manuscript for important intellectual content and has given the final approval of the version to be published.
MCS – has directed the work on differentially expressed genes using the CodeLink™ platform and critically revised the manuscript for important intellectual content and has given the final approval of the version to be published.
Rights and permissions
Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Fujita, A., Sato, J.R., Rodrigues, L.d.O. et al. Evaluating different methods of microarray data normalization. BMC Bioinformatics 7, 469 (2006). https://doi.org/10.1186/1471-2105-7-469
DOI: https://doi.org/10.1186/1471-2105-7-469