Journal of Machine Learning Research 1 (2000) x-xx Submitted 4/00; Published 10/00 
Lasso Screening Rules via Dual Polytope Projection 
Jie Wang jie.wang.ustc@asu.edu 
Peter Wonka Peter.Wonka@asu.edu 
Jieping Ye jieping.ye@asu.edu 
Department of Computer Science and Engineering 
Arizona State University 
Tempe, AZ 85287-8809, USA 
Editor: 
Abstract 
Lasso is a widely used regression technique to find sparse representations. When the dimension of the feature space and the number of samples are extremely large, solving the Lasso problem remains challenging. To improve the efficiency of solving large-scale Lasso problems, El Ghaoui and his colleagues have proposed the SAFE rules, which are able to quickly identify the inactive predictors, i.e., predictors that have 0 components in the solution vector. Then, the inactive predictors or features can be removed from the optimization problem to reduce its scale. By transforming the standard Lasso to its dual form, it can be shown that the inactive predictors include the set of inactive constraints on the optimal dual solution. In this paper, we propose an efficient and effective screening rule via Dual Polytope Projections (DPP), which is mainly based on the uniqueness and nonexpansiveness of the optimal dual solution due to the fact that the feasible set in the dual space is a convex and closed polytope. Moreover, we show that our screening rule can be extended to identify inactive groups in group Lasso. To the best of our knowledge, there is currently no exact screening rule for group Lasso. We have evaluated our screening rule using synthetic and real data sets. Results show that our rule is more effective in identifying inactive predictors than existing state-of-the-art screening rules for Lasso.
Keywords: Lasso, Safe Screening, Sparse Regularization, Polytope Projection, Dual Formulation
1. Introduction 
Data with various structures and scales comes from almost every aspect of daily life. Effectively extracting patterns from the data and building interpretable models with high prediction accuracy is always desirable. One popular technique for identifying important explanatory features is sparse regularization. For instance, consider the widely used $\ell_1$-regularized least squares regression problem known as Lasso (Tibshirani, 1996). The most appealing property of Lasso is the sparsity of its solutions, which is equivalent to feature selection.
Suppose we have $N$ observations and $p$ features. Let $\mathbf{y}$ denote the $N$-dimensional response vector and $\mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_p]$ the $N \times p$ feature matrix. Let $\lambda \geq 0$ be the regularization parameter. The Lasso problem is formulated as the following optimization problem:
$$\inf_{\beta \in \mathbb{R}^p} \ \frac{1}{2}\left\|\mathbf{y} - \mathbf{X}\beta\right\|_2^2 + \lambda\|\beta\|_1. \tag{1}$$

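For concreteness, problem (1) can be handed to any off-the-shelf Lasso solver. Below is a minimal sketch using scikit-learn's `Lasso` on synthetic data; note that scikit-learn minimizes $\frac{1}{2N}\|\mathbf{y}-\mathbf{X}\beta\|_2^2 + \alpha\|\beta\|_1$, so its `alpha` corresponds to $\lambda/N$ in (1).

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy data: N observations, p features (values are purely illustrative).
rng = np.random.default_rng(0)
N, p = 50, 200
X = rng.standard_normal((N, p))
y = X[:, :5] @ rng.standard_normal(5) + 0.1 * rng.standard_normal(N)

lam = 0.5 * np.max(np.abs(X.T @ y))   # some value of the regularization parameter

# scikit-learn scales the quadratic loss by 1/N, so alpha = lambda / N
# matches the objective in (1) up to that constant factor.
beta = Lasso(alpha=lam / N, fit_intercept=False, max_iter=100000).fit(X, y).coef_
print("nonzero coefficients:", np.count_nonzero(beta))
```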
Lasso has achieved great success in a wide range of applications (Chen et al., 2001; Candes, 2006; Zhao and Yu, 2006; Bruckstein et al., 2009; Wright et al., 2010), and in recent years many algorithms have been developed to efficiently solve the Lasso problem (Efron et al., 2004; Kim et al., 2007; Park and Hastie, 2007; Donoho and Tsaig, 2008; Friedman et al., 2007; Becker et al., 2010; Friedman et al., 2010). However, when the dimension of the feature space and the number of samples are very large, solving the Lasso problem remains challenging because we may not even be able to load the data matrix into main memory. The idea of screening has been shown to be very promising for solving large-scale Lasso problems. Essentially, screening aims to quickly identify the inactive features that have 0 components in the solution and then remove them from the optimization. Therefore, we can work on a reduced feature matrix to solve the Lasso problem, which may lead to substantial savings in computational cost and memory usage.
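A minimal sketch of this workflow is given below; `screen` stands for any safe screening rule returning a mask of provably inactive features, and `solve_lasso` for any Lasso solver (both names are hypothetical placeholders, not part of the paper).

```python
import numpy as np

def solve_with_screening(X, y, lam, screen, solve_lasso):
    """Discard features flagged as inactive, solve the reduced problem,
    then scatter the solution back into the full coefficient vector."""
    inactive = screen(X, y, lam)          # boolean mask, True = provably inactive
    keep = ~inactive
    beta = np.zeros(X.shape[1])
    if keep.any():
        beta[keep] = solve_lasso(X[:, keep], y, lam)  # much smaller problem
    return beta
```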
Existing screening methods for Lasso can be roughly divided into two categories: heuristic screening methods and safe screening methods. As the name indicates, heuristic screening methods cannot guarantee that the discarded features have zero coefficients in the solution vector. In other words, they may mistakenly discard active features which have nonzero coefficients in the sparse representations. Well-known heuristic screening methods for Lasso include SIS (Fan and Lv, 2008) and the strong rules (Tibshirani et al., 2012). SIS is based on the associations between features and the prediction task, but not from an optimization point of view. The strong rules rely on the assumption that the absolute values of the inner products between the features and the residual are nonexpansive (Bauschke and Combettes, 2011) with respect to the parameter values. Notice that, in real applications, this assumption is not always true. In order to ensure the correctness of the solutions, the strong rules check the KKT conditions for violations. In case of violations, they weaken the screened set and repeat this process. In contrast to heuristic screening methods, safe screening methods for Lasso can guarantee that the discarded features are absent from the resulting sparse models. Existing safe screening methods for Lasso include SAFE (El Ghaoui et al., 2012) and DOME (Xiang et al., 2011; Xiang and Ramadge, 2012), which are based on an estimation of the dual optimal solution. The key challenge in searching for effective safe screening rules is how to accurately estimate the dual optimal solution: the more accurate the estimation, the more effective the resulting screening rule is in discarding inactive features. Moreover, Xiang et al. (2011) have shown that the SAFE rule for Lasso can be read as a special case of their testing rules.
In this paper, we develop novel efficient and effective screening rules for the Lasso problem; our screening rules are safe in the sense that no active features will be discarded. As the name Dual Polytope Projection (DPP) indicates, the proposed approaches rely heavily on the geometric properties of the Lasso problem. Indeed, the dual problem of problem (1) can be formulated as a projection problem. More specifically, the dual optimal solution of the Lasso problem is the projection of the scaled response vector onto a nonempty closed and convex polytope (the feasible set of the dual problem). This nice property provides us with many elegant approaches to accurately estimate the dual optimal solution, e.g., nonexpansiveness and firm nonexpansiveness (Bauschke and Combettes, 2011). In fact, the estimation of the dual optimal solution in DPP is a direct application of the nonexpansiveness of projection operators. Moreover, by further exploiting the properties of projection operators, we can significantly improve the estimation of the dual optimal solution. Based on this estimation, we develop the so-called enhanced DPP (EDPP) rules, which are able to detect far more inactive features than DPP. Therefore, the speedup gained by EDPP is much higher than that gained by DPP.
In real applications, the optimal value of the parameter $\lambda$ is generally unknown and needs to be estimated. To determine an appropriate value of $\lambda$, commonly used approaches such as cross validation and stability selection involve solving the Lasso problem over a grid of tuning parameters $\lambda_1 > \lambda_2 > \cdots > \lambda_K$. Thus, the process can be very time consuming. To address this challenge, we develop sequential versions of the DPP family. Briefly speaking, for the Lasso problem, suppose we are given the solution $\beta^*(\lambda_{k-1})$ at $\lambda_{k-1}$. We then apply the screening rules to identify the inactive features of problem (1) at $\lambda_k$ by making use of $\beta^*(\lambda_{k-1})$. The idea of sequential screening rules was proposed by El Ghaoui et al. (2012) and Tibshirani et al. (2012) and has been shown to be very effective for the aforementioned scenario. In Tibshirani et al. (2012), the authors demonstrate that the sequential strong rules are very effective in discarding inactive features, especially for very small parameter values, and achieve state-of-the-art performance. However, in contrast to the recursive SAFE rules (the sequential version of SAFE) and the sequential version of the DPP rules, the sequential strong rules may mistakenly discard active features because they are heuristic methods. Moreover, it is worthwhile to mention that, for the existing screening rules including SAFE and the strong rules, the basic versions are usually special cases of their sequential versions, and the same applies to our DPP and EDPP rules. For the DOME rule (Xiang et al., 2011; Xiang and Ramadge, 2012), it is unclear whether a sequential version exists.
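The sequential scheme can be sketched as follows, assuming hypothetical helpers `screen_from_previous` (a sequential safe rule that uses the solution at the previous, larger parameter value) and `solve_lasso` (any Lasso solver).

```python
import numpy as np

def sequential_path(X, y, lambdas, screen_from_previous, solve_lasso):
    """Solve the Lasso over a decreasing grid of parameter values, screening
    each problem with the solution obtained at the previous (larger) value."""
    p = X.shape[1]
    betas = []
    lam_prev, beta_prev = None, None
    for lam in lambdas:                              # assumed sorted, decreasing
        beta = np.zeros(p)
        if beta_prev is None:                        # no previous solution yet
            keep = np.ones(p, dtype=bool)
        else:
            keep = ~screen_from_previous(X, y, lam, lam_prev, beta_prev)
        if keep.any():
            beta[keep] = solve_lasso(X[:, keep], y, lam)
        betas.append(beta)
        lam_prev, beta_prev = lam, beta
    return betas
```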
The rest of this paper is organized as follows. We present the family of DPP screening rules, i.e., DPP and EDPP, in detail for the Lasso problem in Section 2. Section 3 extends the idea of the DPP screening rules to identify inactive groups in group Lasso (Yuan and Lin, 2006). In Section 4, we evaluate our screening rules on synthetic and real data sets from many different applications; the experimental results demonstrate that our rules are more effective in discarding inactive features than existing state-of-the-art screening rules. We show that the efficiency of the solver can be improved by several orders of magnitude with the enhanced DPP rules, especially for high-dimensional data sets (notice that the screening methods can be integrated with any existing solver for the Lasso problem). Some concluding remarks are given in Section 5.
2. Screening Rules for Lasso via Dual Polytope Projections 
In this section, we present the details of the proposed DPP and EDPP screening rules for the Lasso problem. We first review some basics of the dual problem of Lasso, including its geometric properties, in Section 2.1; we also briefly discuss some basic guidelines for developing safe screening rules for Lasso. Based on the geometric properties discussed in Section 2.1, we then develop the basic DPP screening rule in Section 2.2. As a straightforward extension for dealing with model selection problems, we also develop the sequential version of the DPP rules. In Section 2.3, by exploiting more geometric properties of the dual problem of Lasso, we further improve the DPP rules by developing the so-called enhanced DPP (EDPP) rules. The EDPP screening rules significantly outperform the DPP rules in identifying the inactive features for the Lasso problem.
2.1 Basics 
Different from Xiang et al. (2011); Xiang and Ramadge (2012), we do not assume that $\mathbf{y}$ and all $\mathbf{x}_i$ have unit length. The dual problem of problem (1) takes the following form (to make the paper self-contained, we provide the detailed derivation of the dual form in the appendix):
$$\sup_{\theta} \left\{ \frac{1}{2}\|\mathbf{y}\|_2^2 - \frac{\lambda^2}{2}\left\|\theta - \frac{\mathbf{y}}{\lambda}\right\|_2^2 \; : \; |\mathbf{x}_i^T\theta| \leq 1, \ i = 1, 2, \ldots, p \right\}, \tag{2}$$
where $\theta$ is the dual variable. For notational convenience, let the optimal solution of problem (2) be $\theta^*(\lambda)$ [recall that the optimal solution of problem (1) with parameter $\lambda$ is denoted by $\beta^*(\lambda)$]. Then, the KKT conditions are given by:
$$\mathbf{y} = \mathbf{X}\beta^*(\lambda) + \lambda\theta^*(\lambda), \tag{3}$$
$$\mathbf{x}_i^T\theta^*(\lambda) \in \begin{cases} \operatorname{sign}\big([\beta^*(\lambda)]_i\big), & \text{if } [\beta^*(\lambda)]_i \neq 0, \\ [-1, 1], & \text{if } [\beta^*(\lambda)]_i = 0, \end{cases} \qquad i = 1, \ldots, p, \tag{4}$$
where $[\cdot]_k$ denotes the $k$th component. In view of the KKT condition in (4), we have
$$|\mathbf{x}_i^T\theta^*(\lambda)| < 1 \ \Rightarrow \ [\beta^*(\lambda)]_i = 0 \ \Rightarrow \ \mathbf{x}_i \text{ is an inactive feature.} \tag{R1}$$
In other words, we can potentially make use of (R1) to identify the inactive features for the Lasso problem. However, since $\theta^*(\lambda)$ is generally unknown, we cannot directly apply (R1) to identify the inactive features. Inspired by the SAFE rules (El Ghaoui et al., 2012), we can first estimate a region $\Theta$ which contains $\theta^*(\lambda)$. Then, (R1) can be relaxed as follows:
$$\sup_{\theta \in \Theta} |\mathbf{x}_i^T\theta| < 1 \ \Rightarrow \ [\beta^*(\lambda)]_i = 0 \ \Rightarrow \ \mathbf{x}_i \text{ is an inactive feature.} \tag{R1'}$$
Clearly, as long as we can find a region $\Theta$ which contains $\theta^*(\lambda)$, (R1') will lead to a screening rule to detect the inactive features for the Lasso problem. Moreover, in view of (R1) and (R1'), we can see that the smaller the region $\Theta$ is, the more accurate the estimation of $\theta^*(\lambda)$ is. As a result, more inactive features can be identified by the resulting screening rules.
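For intuition, suppose the region is a ball, $\Theta = B(\mathbf{c}, r)$ (the DPP rules developed in Section 2.2 estimate $\theta^*(\lambda)$ with a region of exactly this form). Then $\sup_{\theta \in \Theta} |\mathbf{x}_i^T\theta| = |\mathbf{x}_i^T\mathbf{c}| + r\|\mathbf{x}_i\|_2$, and the relaxed test (R1') becomes a one-line check; a minimal sketch, with the center and radius supplied by whatever screening rule is in use:

```python
import numpy as np

def ball_test(X, center, radius):
    """(R1') with Theta = B(center, radius): feature i is provably inactive
    if |x_i^T center| + radius * ||x_i||_2 < 1."""
    upper = np.abs(X.T @ center) + radius * np.linalg.norm(X, axis=0)
    return upper < 1.0   # boolean mask of features that can be discarded
```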
Geometric Interpretations of the Dual Problem. By a closer look at the dual problem (2), we can observe that the dual optimal solution is the feasible point which is closest to $\mathbf{y}/\lambda$. For notational convenience, let the feasible set of problem (2) be $F$. Clearly, $F$ is the intersection of $2p$ closed half-spaces, and thus a closed and convex polytope. (Notice that $F$ is also nonempty since $0 \in F$.) In other words, $\theta^*(\lambda)$ is the projection of $\mathbf{y}/\lambda$ onto the polytope $F$. Mathematically, for an arbitrary vector $\mathbf{w}$ and a convex set $C$ in a Hilbert space $\mathcal{H}$, let us define the projection operator as
$$P_C(\mathbf{w}) = \operatorname*{argmin}_{\mathbf{u} \in C} \|\mathbf{u} - \mathbf{w}\|_2. \tag{5}$$
Then, the dual optimal solution $\theta^*(\lambda)$ can be expressed as
$$\theta^*(\lambda) = P_F(\mathbf{y}/\lambda) = \operatorname*{argmin}_{\theta \in F} \left\|\theta - \frac{\mathbf{y}}{\lambda}\right\|_2. \tag{6}$$
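Eq. (6) is easy to check numerically: solve the Lasso with any solver, recover $\theta^*(\lambda) = (\mathbf{y} - \mathbf{X}\beta^*(\lambda))/\lambda$ from Eq. (3), and compare it with a direct projection of $\mathbf{y}/\lambda$ onto $F$. A sketch on synthetic data, assuming scikit-learn and cvxpy are available (agreement is up to solver tolerance):

```python
import numpy as np
import cvxpy as cp
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
N, p = 30, 80
X = rng.standard_normal((N, p))
y = rng.standard_normal(N)
lam = 0.5 * np.max(np.abs(X.T @ y))

# theta*(lambda) from the primal solution via the KKT condition (3).
beta = Lasso(alpha=lam / N, fit_intercept=False, max_iter=100000).fit(X, y).coef_
theta_kkt = (y - X @ beta) / lam

# theta*(lambda) as the projection of y/lambda onto F = {theta : |x_i^T theta| <= 1}.
theta = cp.Variable(N)
cp.Problem(cp.Minimize(cp.sum_squares(theta - y / lam)),
           [cp.abs(X.T @ theta) <= 1]).solve()
print(np.linalg.norm(theta_kkt - theta.value))   # should be close to zero
```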
Indeed, the nice property of problem (2) illustrated by Eq. (6) leads to many interesting results. For example, it is easy to see that $\mathbf{y}/\lambda$ would be an interior point of $F$ when $\lambda$ is large enough. If this is the case, we immediately have the following assertions: 1) $\mathbf{y}/\lambda$ being an interior point of $F$ implies that none of the constraints of problem (2) would be active on $\mathbf{y}/\lambda$, i.e., $|\mathbf{x}_i^T(\mathbf{y}/\lambda)| < 1$ for all $i = 1, \ldots, p$; 2) $\theta^*(\lambda)$ is an interior point of $F$ as well, since $\theta^*(\lambda) = P_F(\mathbf{y}/\lambda) = \mathbf{y}/\lambda$ by Eq. (6) and the fact that $\mathbf{y}/\lambda \in F$. Combining the results in 1) and 2), it is easy to see that $|\mathbf{x}_i^T\theta^*(\lambda)| < 1$ for all $i = 1, \ldots, p$. By (R1), we can conclude that $\beta^*(\lambda) = 0$ under the assumption that $\lambda$ is large enough.
The above analysis may naturally lead to a question: does there exist a specific parameter value $\lambda_{\max}$ such that the optimal solution of problem (1) is 0 whenever $\lambda \geq \lambda_{\max}$? The answer is affirmative. Indeed, let us define
$$\lambda_{\max} = \max_i |\mathbf{x}_i^T\mathbf{y}|. \tag{7}$$
It is well known (Tibshirani et al., 2012) that $\lambda_{\max}$ defined by Eq. (7) is the smallest parameter value such that problem (1) has a trivial solution, i.e.,
$$\beta^*(\lambda) = 0, \quad \forall \lambda \in [\lambda_{\max}, \infty). \tag{8}$$
Combining the result in (8) and Eq. (3), we immediately have
$$\theta^*(\lambda) = \frac{\mathbf{y}}{\lambda}, \quad \forall \lambda \in [\lambda_{\max}, \infty). \tag{9}$$
Therefore, throughout the rest of this paper, we will focus on the cases with $\lambda \in (0, \lambda_{\max})$.
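These facts are straightforward to check numerically; a sketch on synthetic data, using scikit-learn's parameterization ($\alpha = \lambda/N$):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
N, p = 40, 100
X = rng.standard_normal((N, p))
y = rng.standard_normal(N)

lam_max = np.max(np.abs(X.T @ y))                  # Eq. (7)

beta = Lasso(alpha=lam_max / N, fit_intercept=False).fit(X, y).coef_
print(np.allclose(beta, 0.0))                      # True: trivial solution, Eq. (8)
# By Eq. (3), theta*(lambda) = y / lambda for any lambda >= lam_max, i.e., Eq. (9).
```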
In the subsequent sections, we will follow (R1') to develop our screening rules. More specifically, the derivation of the proposed screening rules can be divided into the following three steps:

Step 1. We first estimate a region $\Theta$ which contains the dual optimal solution $\theta^*(\lambda)$.

Step 2. We solve the maximization problem in (R1'), i.e., $\sup_{\theta \in \Theta} |\mathbf{x}_i^T\theta|$.

Step 3. By plugging in the upper bound found in Step 2, it is straightforward to develop the screening rule based on (R1').

The geometric property of the dual problem illustrated by Eq. (6) plays a fundamentally important role in developing our DPP and EDPP screening rules.
2.2 Fundamental Screening Rules via Dual Polytope Projections (DPP) 
In this section, we propose the so-called DPP screening rules for discarding the inactive features of Lasso. As the name indicates, the idea of DPP relies heavily on the properties of projection operators, e.g., the nonexpansiveness (Bertsekas, 2003). We will follow the three steps stated in Section 2.1 to develop the DPP screening rules.

First, we need to find a region $\Theta$ which contains the dual optimal solution $\theta^*(\lambda)$. Indeed, the result in (9) provides us with an important clue. That is, we may be able to estimate a possible region for $\theta^*(\lambda)$ in terms of a known $\theta^*(\lambda_0)$ with $\lambda < \lambda_0$. Notice that we can always set $\lambda_0 = \lambda_{\max}$ and make use of the fact that $\theta^*(\lambda_{\max}) = \mathbf{y}/\lambda_{\max}$ implied by (9). Another key ingredient comes from Eq. (6), i.e., the dual optimal solution $\theta^*(\lambda)$ is the projection of $\mathbf{y}/\lambda$ onto the feasible set $F$, which is nonempty, closed, and convex. A nice property of projection operators defined in a Hilbert space with respect to a nonempty closed and convex set is the so-called nonexpansiveness. For convenience, we restate the definition of nonexpansiveness in the following theorem.
Theorem 1 Let $C$ be a nonempty closed convex subset of a Hilbert space $\mathcal{H}$. Then the projection operator defined in Eq. (5) is continuous and nonexpansive, i.e.,
$$\|P_C(\mathbf{w}_2) - P_C(\mathbf{w}_1)\|_2 \leq \|\mathbf{w}_2 - \mathbf{w}_1\|_2, \quad \forall \, \mathbf{w}_1, \mathbf{w}_2 \in \mathcal{H}. \tag{10}$$
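Nonexpansiveness is easy to verify numerically for a simple closed convex set; the sketch below uses the box $[-1, 1]^n$, chosen only because its projection is a coordinatewise clip, and checks the inequality in (10) for random pairs of points.

```python
import numpy as np

def project_box(w):
    """Projection onto the closed convex set C = [-1, 1]^n."""
    return np.clip(w, -1.0, 1.0)

rng = np.random.default_rng(0)
for _ in range(1000):
    w1, w2 = rng.standard_normal(10), rng.standard_normal(10)
    lhs = np.linalg.norm(project_box(w2) - project_box(w1))
    rhs = np.linalg.norm(w2 - w1)
    assert lhs <= rhs + 1e-12   # Theorem 1, Eq. (10)
```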

Lasso Screening Rules via Dual Polytope Projection

  • 1. Journal of Machine Learning Research 1 (2000) x-xx Submitted 4/00; Published 10/00 Lasso Screening Rules via Dual Polytope Projection Jie Wang jie.wang.ustc@asu.edu Peter Wonka Peter.Wonka@asu.edu Jieping Ye jieping.ye@asu.edu Department of Computer Science and Engineering Arizona State University Tempe, AZ 85287-8809, USA Editor: Abstract Lasso is a widely used regression technique to
  • 2. nd sparse representations. When the di-mension of the feature space and the number of samples are extremely large, solving the Lasso problem remains challenging. To improve the eciency of solving large-scale Las-so problems, El Ghaoui and his colleagues have proposed the SAFE rules which are able to quickly identify the inactive predictors, i.e., predictors that have 0 components in the solution vector. Then, the inactive predictors or features can be removed from the op-timization problem to reduce its scale. By transforming the standard Lasso to its dual form, it can be shown that the inactive predictors include the set of inactive constraints on the optimal dual solution. In this paper, we propose an ecient and eective screening rule via Dual Polytope Projections (DPP), which is mainly based on the uniqueness and nonexpansiveness of the optimal dual solution due to the fact that the feasible set in the dual space is a convex and closed polytope. Moreover, we show that our screening rule can be extended to identify inactive groups in group Lasso. To the best of our knowledge, there is currently no exact screening rule for group Lasso. We have evaluated our screening rule using synthetic and real data sets. Results show that our rule is more eective in identifying inactive predictors than existing state-of-the-art screening rules for Lasso. Keywords: Lasso, Safe Screening, Sparse Regularization, Polytope Projection, Dual Formulation 1. Introduction Data with various structures and scales comes from almost every aspect of daily life. To eectively extract patterns in the data and build interpretable models with high prediction accuracy is always desirable. One popular technique to identify important explanatory features is by sparse regularization. For instance, consider the widely used `1-regularized least squares regression problem known as Lasso (Tibshirani, 1996). The most appealing property of Lasso is the sparsity of the solutions, which is equivalent to feature selection. Suppose we have N observations and p features. Let y denote the N dimensional response vector and X = [x1; x2; : : : ; xp] be the N p feature matrix. Let 0 be the regularization parameter. The Lasso problem is formulated as the following optimization problem: inf
  • 3. 2Rp 1 2 ky X
  • 4. k2 2 + k
  • 5. k1: (1) c 2000 Jie Wang, Peter Wonka and Jieping Ye.
  • 6. Wang, Wonka and Ye Lasso has achieved great success in a wide range of applications (Chen et al., 2001; Candes, 2006; Zhao and Yu, 2006; Bruckstein et al., 2009; Wright et al., 2010) and in recent years many algorithms have been developed to eciently solve the Lasso problem (Efron et al., 2004; Kim et al., 2007; Park and Hastie, 2007; Donoho and Tsaig, 2008; Friedman et al., 2007; Becker et al., 2010; Friedman et al., 2010). However, when the dimension of feature space and the number of samples are very large, solving the Lasso problem remains chal-lenging because we may not even be able to load the data matrix into main memory. The idea of screening has been shown very promising in solving Lasso for large-scale problems. Essentially, screening aims to quickly identify the inactive features that have 0 components in the solution and then remove them from the optimization. Therefore, we can work on a reduced feature matrix to solve the Lasso problem, which may lead to substantial savings in computational cost and memory usage. Existing screening methods for Lasso can be roughly divided into two categories: the Heuristic Screening Methods and the Safe Screening Methods. As the name indicated, the heuristic screening methods can not guarantee that the discarded features have zero coecients in the solution vector. In other words, they may mistakenly discard the active features which have nonzero coecients in the sparse representations. Well-known heuristic screening methods for Lasso include SIS (Fan and Lv, 2008) and strong rules (Tibshirani et al., 2012). SIS is based on the associations between features and the prediction task, but not from an optimization point of view. Strong rules rely on the assumption that the absolute values of the inner products between features and the residue are nonexpansive (Bauschke and Combettes, 2011) with respect to the parameter values. Notice that, in real applications, this assumption is not always true. In order to ensure the correctness of the solutions, strong rules check the KKT conditions for violations. In case of violations, they weaken the screened set and repeat this process. In contrast to the heuristic screening methods, the safe screening methods for Lasso can guarantee that the discarded features are absent from the resulting sparse models. Existing safe screening methods for Lasso includes SAFE (El Ghaoui et al., 2012) and DOME (Xiang et al., 2011; Xiang and Ramadge, 2012), which are based on an estimation of the dual optimal solution. The key challenge of searching for eective safe screening rules is how to accurately estimate the dual optimal solution. The more accurate the estimation is, the more eective the resulting screening rule is in discarding the inactive features. Moreover, Xiang et al. (2011) have shown that the SAFE rule for Lasso can be read as a special case of their testing rules. In this paper, we develop novel ecient and eective screening rules for the Lasso prob-lem; our screening rules are safe in the sense that no active features will be discarded. As the name indicated (DPP), the proposed approaches heavily rely on the geometric proper-ties of the Lasso problem. Indeed, the dual problem of problem (1) can be formulated as a projection problem. More speci
  • 7. cally, the dual optimal solution of the Lasso problem is the projection of the scaled response vector onto a nonempty closed and convex polytope (the feasible set of the dual problem). This nice property provides us many elegant ap-proaches to accurately estimate the dual optimal solutions, e.g., nonexpansiveness,
  • 8. rmly nonexpansiveness (Bauschke and Combettes, 2011). In fact, the estimation of the dual optimal solution in DPP is a direct application of the nonexpansiveness of the projection operators. Moreover, by further exploiting the properties of the projection operators, we can signi
  • 9. cantly improve the estimation of the dual optimal solution. Based on this esti- 2
  • 10. Lasso Screening Rules via Dual Polytope Projection mation, we develop the so called enhanced DPP (EDPP) rules which are able to detect far more inactive features than DPP. Therefore, the speedup gained by EDPP is much higher than the one by DPP. In real applications, the optimal parameter value of is generally unknown and needs to be estimated. To determine an appropriate value of , commonly used approaches such as cross validation and stability selection involve solving the Lasso problems over a grid of tuning parameters 1 2 : : : K. Thus, the process can be very time consuming. To address this challenge, we develop the sequential version of the DPP families. Brie y speaking, for the Lasso problem, suppose we are given the solution
  • 11. (k1) at k1. We then apply the screening rules to identify the inactive features of problem (1) at k by making use of
  • 12. (k1). The idea of the sequential screening rules is proposed by El Ghaoui et al. (2012) and Tibshirani et al. (2012) and has been shown to be very eective for the aforementioned scenario. In Tibshirani et al. (2012), the authors demonstrate that the sequential strong rules are very eective in discarding inactive features especially for very small parameter values and achieve the state-of-the-art performance. However, in contrast to the recursive SAFE (the sequential version of SAFE rules) and the sequential version of DPP rules, it is worthwhile to mention that the sequential strong rules may mistakenly discard active features because they are heuristic methods. Moreover, it is worthwhile to mention that, for the existing screening rules including SAFE and strong rules, the basic versions are usually special cases of their sequential versions, and the same applies to our DPP and EDPP rules. For the DOME rule (Xiang et al., 2011; Xiang and Ramadge, 2012), it is unclear whether its sequential version exists. The rest of this paper is organized as follows. We present the family of DPP screening rules, i.e., DPP and EDPP, in detail for the Lasso problem in Section 2. Section 3 extends the idea of DPP screening rules to identify inactive groups in group Lasso (Yuan and Lin, 2006). We evaluate our screening rules on synthetic and real data sets from many dierent applications. In Section 4, the experimental results demonstrate that our rules are more eective in discarding inactive features than existing state-of-the-art screening rules. We show that the eciency of the solver can be improved by several orders of magnitude with the enhanced DPP rules, especially for the high-dimensional data sets (notice that, the screening methods can be integrated with any existing solvers for the Lasso problem). Some concluding remarks are given in Section 5. 2. Screening Rules for Lasso via Dual Polytope Projections In this section, we present the details of the proposed DPP and EDPP screening rules for the Lasso problem. We
  • 13. rst review some basics of the dual problem of Lasso including its geometric properties in Section 2.1; we also brie y discuss some basic guidelines for devel-oping safe screening rules for Lasso. Based on the geometric properties discussed in Section 2.1, we then develop the basic DPP screening rule in Section 2.2. As a straightforward ex-tension in dealing with the model selection problems, we also develop the sequential version of DPP rules. In Section 2.3, by exploiting more geometric properties of the dual problem of Lasso, we further improve the DPP rules by developing the so called enhanced DPP (EDPP) rules. The EDPP screening rules signi
  • 14. cantly outperform DPP rules in identifying the inactive features for the Lasso problem. 3
  • 15. Wang, Wonka and Ye 2.1 Basics Dierent from Xiang et al. (2011); Xiang and Ramadge (2012), we do not assume y and all xi have unit length.The dual problem of problem (1) takes the form of (to make the paper self-contained, we provide the detailed derivation of the dual form in the appendix): sup 1 2 kyk22 2 2 y 2 2 : jxTi j 1; i = 1; 2; : : : ; p ; (2) where is the dual variable. For notational convenience, let the optimal solution of problem (2) be () [recall that the optimal solution of problem (1) with parameter is denoted by
  • 16. ()]. Then, the KKT conditions are given by: y = X
  • 17. () + (); (3) xTi () 2 ( sign([
  • 19. ()]i6= 0; [1; 1]; if [
  • 20. ()]i = 0; i = 1; : : : ; p; (4) where []k denotes the kth component. In view of the KKT condition in (4), we have jxTi (())T j 1 ) [
  • 21. ()]i = 0 ) xi is an inactive feature. (R1) In other words, we can potentially make use of (R1) to identify the inactive features for the Lasso problem. However, since () is generally unknown, we can not directly apply (R1) to identify the inactive features. Inspired by the SAFE rules (El Ghaoui et al., 2012), we can
  • 22. rst estimate a region which contains (00). Then, (R1) can be relaxed as follows: sup 2 jxTi j 1 ) [
  • 23. ()]i = 0 ) xi is an inactive feature. (R1') Clearly, as long as we can
  • 24. nd a region which contains (), (R1') will lead to a screening rule to detect the inactive features for the Lasso problem. Moreover, in view of (R1) and (R1'), we can see that the smaller the region is, the more accurate the estimation of () is. As a result, more inactive features can be identi
  • 25. ed by the resulting screening rules. Geometric Interpretations of the Dual Problem By a closer look at the dual problem (2), we can observe that the dual optimal solution is the feasible point which is closest to y=. For notational convenience, let the feasible set of problem (2) be F. Clearly, F is the intersection of 2p closed half-spaces, and thus a closed and convex polytope. (Notice that, F is also nonempty since 0 2 F.) In other words, () is the projection of y= onto the polytope F. Mathematically, for an arbitrary vector w and a convex set C in a Hilbert space H, let us de
  • 26. ne the projection operator as PC(w) = argmin u2C ku wk2: (5) Then, the dual optimal solution () can be expressed by () = PF (y=) = argmin 2F y 2 : (6) 4
  • 27. Lasso Screening Rules via Dual Polytope Projection Indeed, the nice property of problem (2) illustrated by Eq. (6) leads to many interesting Ti results. For example, it is easy to see that y= would be an interior point of F when is large enough. If this is the case, we immediately have the following assertions: 1) y= is an interior point of F implies that none of the constraints of problem (2) would be active on y=, i.e., jx(y=()j) 1 for all i = 1; : : : ; p; 2) () is an interior point of F as well since () = PF (y=) = y= by Eq. (6) and the fact y= 2 F. Combining the results in 1) and 2), it is easy to see that jxTi ()j 1 for all i = 1; : : : ; p. By (R1), we can conclude that
  • 28. () = 0, under the assumption that is large enough. The above analysis may naturally lead to a question: does there exist a speci
  • 29. c param-eter value max such that the optimal solution of problem (1) is 0 whenever max? The answer is armative. Indeed, let us de
  • 30. ne max = max i jxTi yj: (7) It is well known (Tibshirani et al., 2012) that max de
  • 31. ned by Eq. (7) is the smallest parameter such that problem (1) has a trivial solution, i.e.,
  • 32. () = 0; 8 2 [max;1): (8) Combining the results in (8) and Eq. (3), we immediately have () = y ; 8 2 [max;1): (9) Therefore, through out the rest of this paper, we will focus on the cases with 2 (0; max). In the subsequent sections, we will follow (R1') to develop our screening rules. More speci
  • 33. cally, the derivation of the proposed screening rules can be divided into the following three steps: Step 1. We
  • 34. rst estimate a region which contains the dual optimal solution (). Step 2. We solve the maximization problem in (R1'), i.e., sup2 jxTi j. Step 3. By plugging in the upper bound we
  • 35. nd in Step 2, it is straightforward to develop the screening rule based on (R1'). The geometric property of the dual problem illustrated by Eq. (6) serves as a fundamentally important role in developing our DPP and EDPP screening rules. 2.2 Fundamental Screening Rules via Dual Polytope Projections (DPP) In this Section, we propose the so called DPP screening rules for discarding the inactive features for Lasso. As the name indicates, the idea of DPP heavily relies on the properties of projection operators, e.g., the nonexpansiveness (Bertsekas, 2003). We will follow the three steps stated in Section 2.1 to develop the DPP screening rules. First, we need to
  • 36. nd a region which contains the dual optimal solution (). Indeed, the result in (9) provides us an important clue. That is, we may be able to estimate a possible region for () in terms of a known (0) with 0. Notice that, we can always set 5
  • 37. Wang, Wonka and Ye 0 = max and make use of the fact that (max) = y=max implied by (9). Another key ingredient comes from Eq. (6), i.e., the dual optimal solution () is the projection of y= onto the feasible set F, which is nonempty closed and convex. A nice property of the projection operators de
  • 38. ned in a Hilbert space with respect to a nonempty closed and convex set is the so called nonexpansiveness. For convenience, we restate the de
  • 39. nition of nonexpansiveness in the following theorem. Theorem 1 Let C be a nonempty closed convex subset of a Hilbert space H. Then the projection operator de
  • 40. ned in Eq. (5) is continuous and nonexpansive, i.e., kPC(w2) PC(w1)k2 kw2 w1k2; 8w2;w1 2 H: (10) In view of Eq. (6), a direct application of Theorem 1 leads to the following result: Theorem 2 For the Lasso problem, let ; 0 0 be two regularization parameters. Then, k() (0)k2
  • 44. 1 1 0
  • 48. kyk2: (11) For notational convenience, let a ball centered at c with radius be denoted by B(c; ). Theorem 2 actually implies that the dual optimal solution must be inside a ball centered at (0) with radius j1= 1=0j kyk2, i.e., () 2 B (0);
  • 52. 1 1 0
  • 56. kyk2 : (12) We thus complete the
  • 57. rst step for developing DPP. Because it is easy to
  • 58. nd the upper bound of a linear functional over a ball, we combine the remaining two steps as follows. Theorem 3 For the Lasso problem, assume we are given the solution of its dual problem (0) for a speci
  • 59. c 0. Let be a positive value dierent from 0. Then [
  • 60. ()]i = 0 if
  • 62. xTi
  • 64. () 1 kxik2kyk2
  • 68. 1 1 0
  • 72. : (13) Ti Proof The dual optimal solution () is estimated to be inside the ball given by Eq. (12). To simplify notations, let c = (0) and = j1= 1=0j kyk2. To develop a screening rule based on (R1'), we need to solve the optimization problem: sup2B(c;) jxj. Indeed, for any 2 B(c; ), it can be expressed by: = (0) + v; kvk2 : Therefore, the optimization problem can be easily solved as follows: sup 2B(c;)
  • 74. xTi
  • 80. =
  • 84. + kxik2: (14) By plugging the upper bound in Eq. (14) to (R1'), we obtain the statement in Theorem 3, which completes the proof. 6
  • 85. Lasso Screening Rules via Dual Polytope Projection Theorem 3 implies that we can develop applicable screening rules for Lasso as long as the dual optimal solution () is known for a certain parameter value 0. By simply setting 0 = max and noting that (max) = y=max [please refer to Eq. (9)], Theorem 3 immediately leads to the following result. Corollary 4 Basic DPP: For the Lasso problem (1), let max = maxi jxTi yj. If max, then [
  • 86. ]i = 0; 8i 2 I. Otherwise, [
  • 87. ()]i = 0 if
  • 95. 1 1 1 max kxik2kyk2: Remark 5 Notice that, DPP is not the same as ST1 Xiang et al. (2011) and SAFE El Ghaoui et al. (2012), which discards the ith feature if jxTi yj kxik2kyk2 max max : (15) From the perspective of the sphere test, the radius of ST1/SAFE and DPP are the same. But the centers of ST1 and DPP are y= and y=max respectively, which leads to dierent formulas, i.e., Eq. (15) and Corollary 4. In real applications, the optimal parameter value of is generally unknown and needs to be estimated. To determine an appropriate value of , commonly used approaches such as cross validation and stability selection involve solving the Lasso problem over a grid of tuning parameters 1 2 : : : K, which is very time consuming. Motivated by the ideas of Tibshirani et al. (2012) and El Ghaoui et al. (2012), we develop a sequential version of DPP rules. We
  • 96. rst apply the DPP screening rule in Corollary 4 to discard inactive features for the Lasso problem (1) with parameter being 1. After solving the reduced optimization problem for 1, we obtain the exact solution
  • 97. (1). Hence by Eq. (3), we can
  • 98. nd (1). According to Theorem 3, once we know the optimal dual solution (1), we can construct a new screening rule by setting 0 = 1 to identify inactive features for problem (1) with parameter being 2. By repeating the above process, we obtain the sequential version of the DPP rule as in the following corollary. Corollary 6 Sequential DPP: For the Lasso problem (1), suppose we are given a se- quence of parameter values max = 0 1 : : : m. Then for any integer 0 k m, we have [
  • 100. (k) is known and the following holds:
  • 104. xTi y X
  • 105. (k) k
  • 109. 1 1 k+1 1 k kxik2kyk2: Remark 7 From Corollaries 4 and 6, we can see that both of the DPP and sequential DPP rules discard the inactive features for the Lasso problem with a smaller parameter value by assuming a known dual optimal solution at a larger parameter value. This is in fact a standard way to construct screening rules for Lasso (Tibshirani et al., 2012; El Ghaoui et al., 2012; Xiang et al., 2011; Xiang and Ramadge, 2012). 7
  • 110. Wang, Wonka and Ye Remark 8 For illustration purpose, we present both the basic and sequential version of the DPP screening rules. However, it is easy to see that the basic DPP rule can be easily derived from its sequential version by simply setting k = max and k+1 = . Therefore, in this paper, we will focus on the development and evaluation of the sequential version of the proposed screening rules. To avoid any confusions, DPP and EDPP all refer to the corresponding sequential versions. 2.3 Enhanced DPP Rules for Lasso In this section, we further improve the DPP rules presented in Section 2.2 by a more careful analysis of the projection operators. Indeed, from the three steps by which we develop the DPP rules, we can see that the
  • 111. rst step is a key. In other words, the estimation of the dual optimal solution serves as a fundamentally important role in developing the DPP rules. Moreover, (R1') implies that the more accurate the estimation is, the more eective the resulting screening rule is in discarding the inactive features. The estimation of the dual optimal solution in DPP rules is in fact a direct consequence of the nonexpansiveness of the projection operators. Therefore, in order to improve the performance of the DPP rules in discarding the inactive features, we propose two dierent approaches to
  • 112. nd more accurate estimations of the dual optimal solution. These two approaches are presented in detail in Sections 2.3.1 and 2.3.2 respectively. By combining the ideas of these two approaches, we can further improve the estimation of the dual optimal solution. Based on this estimation, we develop the enhanced DPP rules (EDPP) in Section 2.3.3. Again, we will follow the three steps in Section 2.1 to develop the proposed screening rules. 2.3.1 Improving the DPP rules via Projections of Rays In the DPP screening rules, the dual optimal solution () is estimated to be inside the ball B ((0); j1= 1=0jkyk2) with (0) given. In this section, we show that () lies inside a ball centered at (0) with a smaller radius. Indeed, it is well known that the projection of an arbitrary point onto a nonempty closed convex set C in a Hilbert space H always exists and is unique (Bauschke and Combettes, 2011). However, the converse is not true, i.e., there may exist w1;w2 2 H such that w16= w2 and PC(w1) = PC(w2). In fact, it is known that the following result holds: Lemma 9 (Bauschke and Combettes, 2011) Let C be a nonempty closed convex subset of a Hilbert space H. For a point w 2 H, let w(t) = PC(w) + t(w PC(w)). Then, the projection of the point w(t) is PC(w) for all t 0, i.e., PC(w(t)) = PC(w); 8t 0: (16) Clearly, when w6= PC(w), i.e., w =2 C, w(t) with t 0 is the ray starting from PC(w) and pointing in the same direction as wPC(w). By Lemma 9, we know that the projection of the ray w(t) with t 0 onto the set C is a single point PC(w). [When w = PC(w), i.e., w 2 C, w(t) with t 0 becomes a single point and the statement in Lemma 9 is trivial.] By making use of Lemma 9 and the nonexpansiveness of the projection operators, we can improve the estimation of the dual optimal solution in DPP [please refer to Theorem 2 and Eq. (12)]. More speci
  • 113. cally, we have the following result: 8
  • 114. Lasso Screening Rules via Dual Polytope Projection Theorem 10 For the Lasso problem, suppose the dual optimal solution () at 0 2 (0; max] is known. For any 2 (0; 0], let us de
  • 115. ne v1(0) = ( y 0 (0); if 0 2 (0; max); sign(xT y)x; if 0 = max; where x = argmaxxi jxTi yj; (17) v2(; 0) = y (0); (18) v? 2 (; 0) = v2(; 0) hv1(0); v2(; 0)i kv1(0)k22 v1(0): (19) Then, the dual optimal solution () can be estimated as follows: () 2 B (0); kv? 2 (; 0)k2 B (0);
  • 119. 1 1 0
  • 123. kyk2 : (20) Proof By making use of Lemma 9, we present the proof of the statement for the cases with 0 2 (0; max). We postpone the proof of the statement for the case with 0 = max after we introduce more general technical results. In view of the assumption 0 2 (0; max), it is easy to see that y 0 =2 F ) y 0 6= PF y 0 = (0) ) y 0 (0)6= 0: (21) For each 0 2 (0; max), let us de
  • 124. ne 0(t) = (0) + tv1(0) = (0) + t y 0 ; t 0: (22) (0) By the result in (21), we can see that 0() de
  • 125. ned by Eq. (22) is a ray which starts at (0) and points in the same direction as y=0 (0). In view of Eq. (6), a direct application of Lemma 9 leads to that: PF (0(t)) = (0); 8 t 0: (23) By applying Theorem 1 again, we have k() (0)k2 = PF y PF (0(t)) 2 (24) y 0(t) 2 = t y 0 (0) y (0) 2 = ktv1(0) v2(; 0)k2; 8 t 0: Because the inequality in (24) holds for all t 0, it is easy to see that k() (0)k2 min t0 ktv1(0) v2(; 0)k2 (25) = ( kv2(; 0)k2; if hv1(0); v2(; 0)i 0; v? 2 (; 0) 2 ; otherwise: 9
  • 126. Wang, Wonka and Ye The inequality in (25) implies that, to prove the
  • 127. rst half of the statement, i.e., () 2 B((0); kv? 2 (; 0)k2), we only need to show that hv1(0); v2(; 0)i 0. Indeed, it is easy to see that 0 2 F. Therefore, in view of Eq. (23), the distance between 0(t) and (0) must be shorter than the one between 0(t) and 0 for all t 0, i.e., k0(t) (0)k22 k0(t) 0k22 (26) ) 0 k(0)k22 + 2t (0); y 0 k(0)k22 ; 8 t 0: Since the inequality in (26) holds for all t 0, we can conclude that: (0); y 0 k(0)k22 0 ) kyk2 0 k(0)k2: (27) Therefore, we can see that: hv1(0); v2(; 0)i = y 0 (0); y y 0 + y 0 (0) (28) 1 1 0 y 0 (0); y = 1 1 0 kyk22 0 h(0); yi 1 1 0 kyk22 0 k(0)k2kyk2 0: The last inequality in (28) is due to the result in (27). Clearly, in view of (25) and (28), we can see that the
  • 128. rst half of the statemen-t holds, i.e., () 2 B((0); kv? 2 (; 0)k2). The second half of the statement, i.e., 2 (; 0)k2) B((0); j1= 1=0jkyk2), can be easily obtained by noting B((0); kv? that the inequality in (24) reduces to the one in (12) when t = 1. This completes the proof of the statement with 0 2 (0; max). Before we present the proof of Theorem 10 for the case with 0 = max, let us brie y review some technical results from convex analysis
  • 130. nition 11 Ruszczynski (2006) Let C be a nonempty closed convex subset of a Hilbert space H and w 2 C. The set NC(w) := fv : hv; u wi 0; 8u 2 Cg (29) is called the normal cone to C at w. In terms of the normal cones, the following theorem provides an elegant and useful characterization of the projections onto nonempty closed convex subsets of a Hilbert space. 10
  • 131. Lasso Screening Rules via Dual Polytope Projection Theorem 12 (Bauschke and Combettes, 2011) Let C be a nonempty closed convex subset of a Hilbert space H. Then, for every w 2 H and w0 2 C, w0 is the projection of w onto C if and only if w w0 2 NC(w0), i.e., w0 = PC(w) , hw w0; u w0i 0; 8u 2 C: (30) In view of the proof of Theorem 10, we can see that Eq. (23) is a key step. When 0 = max, similar to Eq. (22), let us de
  • 132. ne max(t) = (max) + tv1(max); 8 t 0: (31) By Theorem 12, the following lemma shows that Eq. (23) also holds for 0 = max. Lemma 13 For the Lasso problem, let v1() and max() be given by Eq. (17) and Eq. (31), then the following result holds: PF (max(t)) = (max); 8 t 0: (32) Proof To prove the statement, Theorem 12 implies that we only need to show: hv1(max); (max)i 0; 8 2 F: (33) Recall that v1(max) = sign(xT y)x, x = argmaxxi jxTi yj [Eq. (17)], and (max) = y=max [Eq. (9)]. It is easy to see that hv1(max); (max)i = sign(xT y)x; y max = jxT yj max = 1: (34) Moreover, assume is an arbitrary point of F. Then, we have jhx; ij 1, and thus hv1(max); i = hsign(xT y)x; i jhx; ij 1: (35) Therefore, the inequality in (33) easily follows by combing the results in (34) and (35), which completes the proof. We are now ready to give the proof of Theorem 10 for the case with 0 = max. Proof In view of Theorem 1 and Lemma 13, we have k() (max)k2 = PF y PF (max(t)) 2 (36) y max(t) 2 = tv1(max) y (max) 2 = ktv1(max) v2(; max)k2; 8 t 0: Because the inequality in (36) holds for all t 0, we can see that k() (max)k2 min t0 ktv1(max) v2(; max)k2 (37) = ( kv2(; max)k2; if hv1(max); v2(; max)i 0; v? 2 (; max) 2 ; otherwise: 11
  • 133. Wang, Wonka and Ye Clearly, we only need to show that hv1(max); v2(; max)i 0. Indeed, Lemma 13 implies that v1(max) 2 NF ((max)) [please refer to the inequality in (33)]. By noting that 0 2 F, we have v1(max); 0 y max 0 ) hv1(max); yi 0: (38) Moreover, because y=max = (max), it is easy to see that hv1(max); v2(; max)i = v1(max); y y max (39) = 1 1 max hv1(max); yi 0: Therefore, in view of (37) and (39), we can see that the
  • 134. rst half of the statement holds, i.e., () 2 B((max); kv? 2 (; max)k2). The second half of the statement, i.e., 2 (; max)k2) B((max); j1=1=maxjkyk2), can be easily obtained by B((max); kv? noting that the inequality in (37) reduces to the one in (12) when t = 0. This completes the proof of the statement with 0 = max. Thus, the proof of Theorem 10 is completed. Theorem 10 in fact provides a more accurate estimation of the dual optimal solution than the one in DPP, i.e., () lies inside a ball centered at (0) with a radius kv? 2 (; 0)k2. Based on this improved estimation and (R1'), we can develop the following screening rule to discard the inactive features for Lasso. Theorem 14 For the Lasso problem, assume the dual optimal solution () at 0 2 (0; max] is known. Then, for each 2 (0; 0), we have [
  • 135. ()]i = 0 if jxTi (0)j 1 kv? 2 (; 0)k2kxik2: We omit the proof of Theorem 14 since it is very similar to the one of Theorem 3. By Theorem 14, we can easily develop the following sequential screening rule. Improvement 1: For the Lasso problem (1), suppose we are given a sequence of pa- rameter values max = 0 1 : : : K. Then for any integer 0 k K, we have [
  • 136. (k+1)]i = 0 if
  • 137. (k) is known and the following holds:
  • 141. xTi y X
  • 142. (k) k
  • 146. 1 kv? 2 (k+1; k)k2kxik2: The screening rule in Improvement 1 is developed based on (R1') and the estimation of the dual optimal solution in Theorem 10, which is more accurate than the one in DPP. Therefore, in view of (R1'), the screening rule in Improvement 1 are more eective in discarding the inactive features than the DPP rule. 12
2.3.2 Improving the DPP Rules via Firm Nonexpansiveness

In Section 2.3.1, we improve the estimation of the dual optimal solution in DPP by making use of the projections of properly chosen rays. (R1') implies that the resulting screening rule stated in Improvement 1 is more effective in discarding the inactive features than DPP. In this section, we present another approach to improve the estimation of the dual optimal solution in DPP, by making use of the so-called firm nonexpansiveness of projections onto nonempty closed convex subsets of a Hilbert space.

Theorem 15 (Bauschke and Combettes, 2011) Let $C$ be a nonempty closed convex subset of a Hilbert space $\mathcal{H}$. Then the projection operator defined in Eq. (5) is continuous and firmly nonexpansive. In other words, for any $w_1, w_2 \in \mathcal{H}$, we have
$$\|P_C(w_1) - P_C(w_2)\|_2^2 + \|(\mathrm{Id} - P_C)(w_1) - (\mathrm{Id} - P_C)(w_2)\|_2^2 \leq \|w_1 - w_2\|_2^2, \quad (40)$$
where $\mathrm{Id}$ is the identity operator.

In view of the inequalities in (40) and (10), it is easy to see that firm nonexpansiveness implies nonexpansiveness, but the converse is not true. Therefore, firm nonexpansiveness of the projection operator is a stronger property than nonexpansiveness. A direct application of Theorem 15 leads to the following result.

Theorem 16 For the Lasso problem, let $\lambda, \lambda_0 > 0$ be two parameter values. Then
$$\theta^*(\lambda) \in B\!\left(\theta^*(\lambda_0) + \tfrac{1}{2}\Big(\tfrac{1}{\lambda} - \tfrac{1}{\lambda_0}\Big)y,\; \tfrac{1}{2}\Big|\tfrac{1}{\lambda} - \tfrac{1}{\lambda_0}\Big|\,\|y\|_2\right) \subseteq B\!\left(\theta^*(\lambda_0),\; \Big|\tfrac{1}{\lambda} - \tfrac{1}{\lambda_0}\Big|\,\|y\|_2\right). \quad (41)$$

Proof In view of Eq. (6) and the firm nonexpansiveness in (40), we have
$$\|\theta^*(\lambda) - \theta^*(\lambda_0)\|_2^2 + \left\|\Big(\tfrac{y}{\lambda} - \theta^*(\lambda)\Big) - \Big(\tfrac{y}{\lambda_0} - \theta^*(\lambda_0)\Big)\right\|_2^2 \leq \left\|\tfrac{y}{\lambda} - \tfrac{y}{\lambda_0}\right\|_2^2 \quad (42)$$
$$\Leftrightarrow\; \|\theta^*(\lambda) - \theta^*(\lambda_0)\|_2^2 \leq \left\langle \theta^*(\lambda) - \theta^*(\lambda_0),\; \tfrac{y}{\lambda} - \tfrac{y}{\lambda_0}\right\rangle$$
$$\Leftrightarrow\; \left\|\theta^*(\lambda) - \theta^*(\lambda_0) - \tfrac{1}{2}\Big(\tfrac{1}{\lambda} - \tfrac{1}{\lambda_0}\Big)y\right\|_2 \leq \tfrac{1}{2}\Big|\tfrac{1}{\lambda} - \tfrac{1}{\lambda_0}\Big|\,\|y\|_2,$$
which completes the proof of the first half of the statement. The second half of the statement is trivial by noting that the first inequality in (42) (firm nonexpansiveness) implies the inequality in (11) (nonexpansiveness), but not vice versa. Indeed, it is easy to see that the ball in the middle of (41) is inside the right one and has only half the radius.

Clearly, Theorem 16 provides a more accurate estimation of the dual optimal solution than the one in DPP, i.e., the dual optimal solution must be inside a ball which is a subset of the one in DPP and has only half the radius. Again, based on the estimation in Theorem 16 and (R1'), we have the following result.
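For readers who want the algebra behind the last equivalence in the proof of Theorem 16, it is simply a completion of the square; the following display is an added worked step, not part of the original proof. Writing $a = \theta^*(\lambda) - \theta^*(\lambda_0)$ and $b = \big(\tfrac{1}{\lambda} - \tfrac{1}{\lambda_0}\big)y$, the inequality $\|a\|_2^2 \leq \langle a, b\rangle$ gives
$$\left\|a - \tfrac{1}{2}b\right\|_2^2 = \|a\|_2^2 - \langle a, b\rangle + \tfrac{1}{4}\|b\|_2^2 \leq \tfrac{1}{4}\|b\|_2^2 \;\Longrightarrow\; \left\|\theta^*(\lambda) - \theta^*(\lambda_0) - \tfrac{1}{2}\Big(\tfrac{1}{\lambda} - \tfrac{1}{\lambda_0}\Big)y\right\|_2 \leq \tfrac{1}{2}\Big|\tfrac{1}{\lambda} - \tfrac{1}{\lambda_0}\Big|\,\|y\|_2,$$
which is exactly the ball in the middle of (41).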
Theorem 17 For the Lasso problem, assume the dual optimal solution $\theta^*(\lambda_0)$ at $\lambda_0 \in (0, \lambda_{\max}]$ is known. Then, for each $\lambda \in (0, \lambda_0)$, we have $[\beta^*(\lambda)]_i = 0$ if
$$\left|x_i^T\Big(\theta^*(\lambda_0) + \tfrac{1}{2}\Big(\tfrac{1}{\lambda} - \tfrac{1}{\lambda_0}\Big)y\Big)\right| < 1 - \tfrac{1}{2}\Big(\tfrac{1}{\lambda} - \tfrac{1}{\lambda_0}\Big)\|y\|_2\,\|x_i\|_2.$$

We omit the proof of Theorem 17 since it is very similar to the proof of Theorem 3. A direct application of Theorem 17 leads to the following sequential screening rule.

Improvement 2: For the Lasso problem (1), suppose we are given a sequence of parameter values $\lambda_{\max} = \lambda_0 > \lambda_1 > \ldots > \lambda_K$. Then for any integer $0 \leq k < K$, we have $[\beta^*(\lambda_{k+1})]_i = 0$ if $\beta^*(\lambda_k)$ is known and the following holds:
$$\left|x_i^T\Big(\frac{y - X\beta^*(\lambda_k)}{\lambda_k} + \tfrac{1}{2}\Big(\tfrac{1}{\lambda_{k+1}} - \tfrac{1}{\lambda_k}\Big)y\Big)\right| < 1 - \tfrac{1}{2}\Big(\tfrac{1}{\lambda_{k+1}} - \tfrac{1}{\lambda_k}\Big)\|y\|_2\,\|x_i\|_2.$$

Because the screening rule in Improvement 2 is developed based on (R1') and the estimation in Theorem 16, it is easy to see that Improvement 2 is more effective in discarding the inactive features than DPP.

2.3.3 The Proposed Enhanced DPP Rules

In Sections 2.3.1 and 2.3.2, we present two different approaches to improve the estimation of the dual optimal solution in DPP. In view of (R1'), we can see that the resulting screening rules, i.e., Improvements 1 and 2, are more effective in discarding the inactive features than DPP. In this section, we give a more accurate estimation of the dual optimal solution than the ones in Theorems 10 and 16 by combining the aforementioned two approaches. The resulting screening rule for Lasso is the so-called enhanced DPP rule (EDPP). Again, (R1') implies that EDPP is more effective in discarding the inactive features than the screening rules in Improvements 1 and 2. We also present several experiments to demonstrate that EDPP is able to identify more inactive features than the screening rules in Improvements 1 and 2. Therefore, in the subsequent sections, we will focus on the generalizations and evaluations of EDPP.

To develop the EDPP rules, we still follow the three steps in Section 2.1. Indeed, by combining the two approaches proposed in Sections 2.3.1 and 2.3.2, we can further improve the estimation of the dual optimal solution in the following theorem.

Theorem 18 For the Lasso problem, suppose the dual optimal solution $\theta^*(\lambda_0)$ at $\lambda_0 \in (0, \lambda_{\max}]$ is known, and for all $\lambda \in (0, \lambda_0]$, let $v_2^{\perp}(\lambda, \lambda_0)$ be given by Eq. (19). Then, we have
$$\left\|\theta^*(\lambda) - \Big(\theta^*(\lambda_0) + \tfrac{1}{2}v_2^{\perp}(\lambda, \lambda_0)\Big)\right\|_2 \leq \tfrac{1}{2}\|v_2^{\perp}(\lambda, \lambda_0)\|_2. \quad (43)$$

Proof Recall that $\theta_0(t)$ is defined by Eq. (22) and Eq. (31). In view of (40), we have
$$\left\|P_F\!\Big(\tfrac{y}{\lambda}\Big) - P_F(\theta_0(t))\right\|_2^2 + \left\|(\mathrm{Id} - P_F)\!\Big(\tfrac{y}{\lambda}\Big) - (\mathrm{Id} - P_F)(\theta_0(t))\right\|_2^2 \leq \left\|\tfrac{y}{\lambda} - \theta_0(t)\right\|_2^2. \quad (44)$$
By expanding the second term on the left hand side of (44) and rearranging the terms, we obtain the following equivalent form:
$$\left\|P_F\!\Big(\tfrac{y}{\lambda}\Big) - P_F(\theta_0(t))\right\|_2^2 \leq \left\langle \tfrac{y}{\lambda} - \theta_0(t),\; P_F\!\Big(\tfrac{y}{\lambda}\Big) - P_F(\theta_0(t))\right\rangle. \quad (45)$$
In view of Eq. (6), Eq. (23) and Eq. (32), the inequality in (45) can be rewritten as
$$\|\theta^*(\lambda) - \theta^*(\lambda_0)\|_2^2 \leq \left\langle \tfrac{y}{\lambda} - \theta_0(t),\; \theta^*(\lambda) - \theta^*(\lambda_0)\right\rangle = \left\langle \tfrac{y}{\lambda} - \theta^*(\lambda_0) - t v_1(\lambda_0),\; \theta^*(\lambda) - \theta^*(\lambda_0)\right\rangle = \langle v_2(\lambda, \lambda_0) - t v_1(\lambda_0),\; \theta^*(\lambda) - \theta^*(\lambda_0)\rangle, \quad \forall t \geq 0. \quad (46)$$
[Recall that $v_1(\lambda_0)$ and $v_2(\lambda, \lambda_0)$ are defined by Eq. (17) and Eq. (18), respectively.] Clearly, the inequality in (46) is equivalent to
$$\left\|\theta^*(\lambda) - \theta^*(\lambda_0) - \tfrac{1}{2}\big(v_2(\lambda, \lambda_0) - t v_1(\lambda_0)\big)\right\|_2^2 \leq \tfrac{1}{4}\|v_2(\lambda, \lambda_0) - t v_1(\lambda_0)\|_2^2, \quad \forall t \geq 0. \quad (47)$$
The statement follows easily by minimizing the right hand side of the inequality in (47) over $t \geq 0$, which has been done in the proof of Theorem 10.

Indeed, Theorem 18 is equivalent to bounding $\theta^*(\lambda)$ in a ball as follows:
$$\theta^*(\lambda) \in B\!\left(\theta^*(\lambda_0) + \tfrac{1}{2}v_2^{\perp}(\lambda, \lambda_0),\; \tfrac{1}{2}\|v_2^{\perp}(\lambda, \lambda_0)\|_2\right). \quad (48)$$
Based on this estimation and (R1'), we immediately have the following result.

Theorem 19 For the Lasso problem, assume the dual optimal solution $\theta^*(\lambda_0)$ at $\lambda_0 \in (0, \lambda_{\max}]$ is known, and $\lambda \in (0, \lambda_0]$. Then $[\beta^*(\lambda)]_i = 0$ if the following holds:
$$\left|x_i^T\Big(\theta^*(\lambda_0) + \tfrac{1}{2}v_2^{\perp}(\lambda, \lambda_0)\Big)\right| < 1 - \tfrac{1}{2}\|v_2^{\perp}(\lambda, \lambda_0)\|_2\,\|x_i\|_2.$$

We omit the proof of Theorem 19 since it is very similar to the one of Theorem 3. Based on Theorem 19, we can develop the EDPP rules as follows.

Corollary 20 EDPP: For the Lasso problem, suppose we are given a sequence of parameter values $\lambda_{\max} = \lambda_0 > \lambda_1 > \ldots > \lambda_K$. Then for any integer $0 \leq k < K$, we have $[\beta^*(\lambda_{k+1})]_i = 0$ if $\beta^*(\lambda_k)$ is known and the following holds:
$$\left|x_i^T\Big(\frac{y - X\beta^*(\lambda_k)}{\lambda_k} + \tfrac{1}{2}v_2^{\perp}(\lambda_{k+1}, \lambda_k)\Big)\right| < 1 - \tfrac{1}{2}\|v_2^{\perp}(\lambda_{k+1}, \lambda_k)\|_2\,\|x_i\|_2. \quad (49)$$

It is easy to see that the ball in (48) has the smallest radius compared to the ones in Theorems 10 and 16, and thus it provides the most accurate estimation of the dual optimal solution. According to (R1'), EDPP is more effective in discarding the inactive features than DPP and Improvements 1 and 2.
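The following is a minimal NumPy sketch of the sequential EDPP rule in (49) applied along a decreasing parameter sequence. It is only an illustration of the rule as stated above: `solve_lasso` is a placeholder for any Lasso solver, and all names and signatures are assumptions rather than an official implementation.

```python
import numpy as np

def edpp_path(X, y, lambdas, solve_lasso):
    """Sketch of sequential EDPP screening; lambdas[0] must equal lambda_max = max_i |x_i^T y|.

    solve_lasso(X, y, lam) is a hypothetical solver returning the primal solution beta.
    """
    lam_max = lambdas[0]
    col_norms = np.linalg.norm(X, axis=0)
    betas = [np.zeros(X.shape[1])]                      # beta*(lambda_max) = 0
    for k in range(len(lambdas) - 1):
        lam0, lam = lambdas[k], lambdas[k + 1]
        beta0 = betas[-1]
        theta0 = (y - X @ beta0) / lam0                 # dual optimum at lam0
        if lam0 < lam_max:
            v1 = y / lam0 - theta0
        else:                                           # lam0 == lambda_max
            i_star = np.argmax(np.abs(X.T @ y))
            v1 = X[:, i_star] * np.sign(X[:, i_star] @ y)
        v2 = y / lam - theta0
        v2_perp = v2 - (v1 @ v2) / (v1 @ v1) * v1
        # EDPP test (49): a feature is kept only if it is NOT certified inactive.
        keep = np.abs(X.T @ (theta0 + 0.5 * v2_perp)) >= 1.0 - 0.5 * np.linalg.norm(v2_perp) * col_norms
        beta = np.zeros(X.shape[1])
        beta[keep] = solve_lasso(X[:, keep], y, lam)    # solve the reduced problem
        betas.append(beta)
    return betas
```

Because the rule is safe, the reduced problem over the kept columns has the same nonzero pattern as the full problem, so no KKT re-checking of the discarded features is needed.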
[Figure 1: three rejection-ratio curves (DPP, Improvement 1, Improvement 2, EDPP versus $\lambda/\lambda_{\max}$) and three speedup bar charts for (a) Prostate Cancer, $X \in \mathbb{R}^{132\times 15154}$; (b) PIE, $X \in \mathbb{R}^{1024\times 11553}$; (c) MNIST, $X \in \mathbb{R}^{784\times 50000}$.]

Figure 1: Comparison of the family of DPP rules on three real data sets: the Prostate Cancer data set (left), the PIE data set (middle) and the MNIST image data set (right). The first row shows the rejection ratios of DPP, Improvement 1, Improvement 2 and EDPP. The second row presents the speedup gained by these four methods.

Comparisons of the Family of DPP Rules We evaluate the performance of the family of DPP screening rules, i.e., DPP, Improvement 1, Improvement 2 and EDPP, on three real data sets: a) the Prostate Cancer data set (Petricoin et al., 2002); b) the PIE face image data set (Sim et al., 2003); c) the MNIST handwritten digit data set (Lecun et al., 1998). To measure the performance of the screening rules, we compute the following two quantities:

1. the rejection ratio, i.e., the ratio of the number of features discarded by the screening rules to the actual number of zero features in the ground truth;

2. the speedup, i.e., the ratio of the running time of the solver without screening to the running time of the solver with the screening rules.

For each data set, we run the solver with or without the screening rules to solve the Lasso problem along a sequence of 100 parameter values equally spaced on the $\lambda/\lambda_{\max}$ scale from 0.05 to 1.0. Fig. 1 presents the rejection ratios and speedup achieved by the family of DPP screening rules. Table 1 reports the running time of the solver with or without the screening rules for solving the 100 Lasso problems, as well as the time for running the screening rules.

The Prostate Cancer Data Set The Prostate Cancer data set (Petricoin et al., 2002) is obtained by protein mass spectrometry. The features are indexed by time-of-flight values, which are related to the mass over charge ratios of the constituent proteins in the blood. The data set has 15154 measurements of 132 patients; 69 of the patients have prostate cancer and the rest are healthy. Therefore, the data matrix $X$ is of size $132 \times 15154$, and the response vector $y \in \{1, -1\}^{132}$ contains the binary labels of the patients.
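For clarity, the two performance measures above can be computed as in the following small sketch; both helpers are hypothetical and only restate the definitions given above.

```python
import numpy as np

def rejection_ratio(discarded_mask, beta_star):
    """Number of discarded features over the number of zero entries of the computed solution.

    For a safe rule such as EDPP, every discarded feature is indeed zero, so the ratio is <= 1.
    """
    n_zero = np.count_nonzero(beta_star == 0)
    return np.count_nonzero(discarded_mask) / n_zero

def speedup(time_without_screening, time_with_screening):
    """Speedup as used here: solver time without screening over solver time with screening."""
    return time_without_screening / time_with_screening
```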
Data             solver    DPP+solver  Imp.1+solver  Imp.2+solver  EDPP+solver   DPP    Imp.1  Imp.2  EDPP
Prostate Cancer  121.41    23.36       6.39          17.00         3.70          0.30   0.27   0.28   0.23
PIE              629.94    74.66       11.15         55.45         4.13          1.63   1.34   1.54   1.33
MNIST            2566.26   332.87      37.80         226.02        11.12         5.28   4.36   4.94   4.19

Table 1: Running time (in seconds) for solving the Lasso problems along a sequence of 100 tuning parameter values equally spaced on the scale of $\lambda/\lambda_{\max}$ from 0.05 to 1 by (a): the solver (Liu et al., 2009) without screening (reported in the second column); (b): the solver combined with different screening methods (reported in the 3rd to the 6th columns). The last four columns report the total running time (in seconds) for the screening methods.

The PIE Face Image Data Set The PIE face image data set used in this experiment (Cai et al., 2007; available at http://www.cad.zju.edu.cn/home/dengcai/Data/FaceData.html) contains 11554 gray face images of 68 people, taken under different poses, illumination conditions and expressions. Each of the images has $32 \times 32$ pixels. Therefore, in each trial, we first randomly pick an image as the response $y \in \mathbb{R}^{1024}$, and then use the remaining images to form the data matrix $X \in \mathbb{R}^{1024\times 11553}$. We run 100 trials and report the average performance of the screening rules.

The MNIST Handwritten Digit Data Set This data set contains grey images of scanned handwritten digits, including 60,000 for training and 10,000 for testing. The dimension of each image is $28 \times 28$. We first randomly select 5000 images for each digit from the training set (so in total we have 50000 images) and get a data matrix $X \in \mathbb{R}^{784\times 50000}$. Then in each trial, we randomly select an image from the testing set as the response $y \in \mathbb{R}^{784}$. We run 100 trials and report the average performance of the screening rules.

From Fig. 1, we can see that both Improvements 1 and 2 are able to discard more inactive features than DPP, and thus lead to a higher speedup. Compared to Improvement 2, we can also observe that Improvement 1 is more effective in discarding the inactive features. For the three data sets, the second row of Fig. 1 shows that Improvement 1 leads to about 20, 60 and 70 times speedup respectively, which is much higher than the speedup gained by Improvement 2 (roughly 10 times for all three cases). Moreover, the EDPP rule, which combines the ideas of both Improvements 1 and 2, is even more effective in discarding the inactive features than Improvement 1. We can see that, for all three data sets and most of the 100 parameter values, the rejection ratios of EDPP are very close to 100%. In other words, EDPP is able to discard almost all of the inactive features. Thus, the resulting speedup of EDPP is significantly better than the ones gained by the other three DPP rules. For the PIE and MNIST data sets, we can see that the speedup gained by EDPP is about 150 and 230 times, i.e., two orders of magnitude. In view of Table 1, for the MNIST data set, the solver without screening needs about 2566.26 seconds to solve the 100 Lasso problems. In contrast, the solver with EDPP only needs 11.12 seconds, leading to substantial savings in the computational cost. Moreover, from the last four columns of Table 1, we can also observe that the computational cost of the family of DPP rules is very low. Compared to that of the solver without screening, the computational cost of the family of DPP rules is negligible. In Section 4, we will only compare the performance of EDPP against several other state-of-the-art screening rules.

3. Extensions to Group Lasso

To demonstrate the flexibility of the family of DPP rules, we extend the idea of EDPP to the group Lasso problem (Yuan and Lin, 2006) in this section. Although the Lasso and group Lasso problems are very different from each other, we will see that their dual problems share a lot of similarities. For example, both of the dual problems can be formulated as looking for projections onto nonempty closed convex subsets of a Hilbert space. Recall that the EDPP rule for the Lasso problem is entirely based on the properties of the projection operators. Therefore, the framework of the EDPP screening rule we developed for Lasso is also applicable to the group Lasso problem. In Section 3.1, we briefly review some basics of the group Lasso problem and explore the geometric properties of its dual problem. In Section 3.2, we develop the EDPP rule for the group Lasso problem.

3.1 Basics

With the group information available, the group Lasso problem takes the form of:
$$\inf_{\beta \in \mathbb{R}^p} \; \frac{1}{2}\left\|y - \sum_{g=1}^{G} X_g \beta_g\right\|_2^2 + \lambda \sum_{g=1}^{G} \sqrt{n_g}\,\|\beta_g\|_2, \quad (50)$$
where $X_g \in \mathbb{R}^{N \times n_g}$ is the data matrix for the $g$th group and $p = \sum_{g=1}^{G} n_g$. The dual problem of (50) is (see the detailed derivation in the appendix):
$$\sup_{\theta}\; \left\{\frac{1}{2}\|y\|_2^2 - \frac{\lambda^2}{2}\left\|\theta - \frac{y}{\lambda}\right\|_2^2 \;:\; \|X_g^T\theta\|_2 \leq \sqrt{n_g},\; g = 1, 2, \ldots, G\right\}. \quad (51)$$
The KKT conditions are given by
$$y = \sum_{g=1}^{G} X_g \beta_g^*(\lambda) + \lambda\theta^*(\lambda), \quad (52)$$
$$(\theta^*(\lambda))^T X_g \in \begin{cases} \sqrt{n_g}\, \dfrac{(\beta_g^*(\lambda))^T}{\|\beta_g^*(\lambda)\|_2}, & \text{if } \beta_g^*(\lambda) \neq 0, \\[4pt] \{\sqrt{n_g}\, u^T : \|u\|_2 \leq 1\}, & \text{if } \beta_g^*(\lambda) = 0, \end{cases} \quad (53)$$
for $g = 1, 2, \ldots, G$. Clearly, in view of Eq. (53), we can see that
$$\|(\theta^*(\lambda))^T X_g\|_2 < \sqrt{n_g} \;\Rightarrow\; \beta_g^*(\lambda) = 0. \quad \text{(R2)}$$
However, since $\theta^*(\lambda)$ is generally unknown, (R2) is not applicable to identify the inactive groups, i.e., the groups which have 0 coefficients in the solution vector, for the group Lasso problem. Therefore, similar to the Lasso problem, we can first find a region $\Theta$ which contains $\theta^*(\lambda)$, and then (R2) can be relaxed as follows:
$$\sup_{\theta \in \Theta} \|\theta^T X_g\|_2 < \sqrt{n_g} \;\Rightarrow\; \beta_g^*(\lambda) = 0. \quad \text{(R2')}$$
Therefore, to develop screening rules for the group Lasso problem, we only need to estimate a region $\Theta$ which contains $\theta^*(\lambda)$, solve the maximization problem in (R2'), and plug the result into (R2'). In other words, the three steps proposed in Section 2.1 can also be applied to develop screening rules for the group Lasso problem. Moreover, (R2') also implies that the smaller the region $\Theta$ is, the more accurate the estimation of the dual optimal solution is and, as a result, the more effective the resulting screening rule is in discarding the inactive features.

Geometric Interpretations For notational convenience, let $\bar F$ be the feasible set of problem (51). Similar to the case of Lasso, problem (51) implies that the dual optimal solution $\theta^*(\lambda)$ is the projection of $y/\lambda$ onto the feasible set $\bar F$, i.e.,
$$\theta^*(\lambda) = P_{\bar F}\!\left(\frac{y}{\lambda}\right), \quad \forall \lambda > 0. \quad (54)$$
Compared to Eq. (6), the only difference in Eq. (54) is that the feasible set $\bar F$ is the intersection of a set of ellipsoids, and thus not a polytope. However, similar to $F$, the set $\bar F$ is also nonempty, closed and convex (notice that $0$ is a feasible point). Therefore, we can make use of all the aforementioned properties of the projection operators, e.g., Lemmas 9 and 13 and Theorems 12 and 15, to develop screening rules for the group Lasso problem. Moreover, similar to the case of Lasso, we also have a specific parameter value (Tibshirani et al., 2012) for the group Lasso problem, i.e.,
$$\lambda_{\max} = \max_g \frac{\|X_g^T y\|_2}{\sqrt{n_g}}. \quad (55)$$
Indeed, $\lambda_{\max}$ is the smallest parameter value such that the optimal solution of problem (50) is 0. More specifically, we have
$$\beta^*(\lambda) = 0, \quad \forall \lambda \in [\lambda_{\max}, \infty). \quad (56)$$
Combining the result in (56) and Eq. (52), we immediately have
$$\theta^*(\lambda) = \frac{y}{\lambda}, \quad \forall \lambda \in [\lambda_{\max}, \infty). \quad (57)$$
Therefore, all through the subsequent sections, we will focus on the cases with $\lambda \in (0, \lambda_{\max})$.

3.2 Enhanced DPP Rule for Group Lasso

In view of (R2'), we can see that the estimation of the dual optimal solution is the key step in developing a screening rule for the group Lasso problem. Because $\theta^*(\lambda)$ is the projection of $y/\lambda$ onto the nonempty closed convex set $\bar F$ [please refer to Eq. (54)], we can make use of all the properties of projection operators, e.g., Lemmas 9 and 13 and Theorems 12 and 15, to estimate the dual optimal solution. First, let us develop a useful technical result as follows.

Lemma 21 For the group Lasso problem, let $\lambda_{\max}$ be given by Eq. (55) and
$$X_* := \operatorname*{argmax}_{X_g} \frac{\|X_g^T y\|_2}{\sqrt{n_g}}. \quad (58)$$
Suppose the dual optimal solution $\theta^*(\lambda_0)$ is known at $\lambda_0 \in (0, \lambda_{\max}]$, and define
$$v_1(\lambda_0) = \begin{cases} \dfrac{y}{\lambda_0} - \theta^*(\lambda_0), & \text{if } \lambda_0 \in (0, \lambda_{\max}), \\[4pt] X_* X_*^T y, & \text{if } \lambda_0 = \lambda_{\max}, \end{cases} \quad (59)$$
$$\theta_0(t) = \theta^*(\lambda_0) + t\, v_1(\lambda_0), \quad t \geq 0. \quad (60)$$
Then, the following result holds:
$$P_{\bar F}(\theta_0(t)) = \theta^*(\lambda_0), \quad \forall t \geq 0. \quad (61)$$

Proof Let us first consider the case with $\lambda_0 \in (0, \lambda_{\max})$. In view of the definition of $\lambda_{\max}$, it is easy to see that $y/\lambda_0 \notin \bar F$. Therefore, in view of Eq. (54) and Lemma 9, the statement in Eq. (61) follows immediately.

We next consider the case with $\lambda_0 = \lambda_{\max}$. By Theorem 12, we only need to check if
$$v_1(\lambda_{\max}) \in N_{\bar F}(\theta^*(\lambda_{\max})) \;\Leftrightarrow\; \langle v_1(\lambda_{\max}), \theta - \theta^*(\lambda_{\max})\rangle \leq 0, \quad \forall \theta \in \bar F. \quad (62)$$
Indeed, in view of Eq. (55) and Eq. (57), we can see that
$$\langle v_1(\lambda_{\max}), \theta^*(\lambda_{\max})\rangle = \left\langle X_* X_*^T y,\; \frac{y}{\lambda_{\max}}\right\rangle = \frac{\|X_*^T y\|_2^2}{\lambda_{\max}}. \quad (63)$$
On the other hand, by Eq. (55) and Eq. (58), we can see that
$$\|X_*^T y\|_2 = \lambda_{\max}\sqrt{n_*}, \quad (64)$$
where $n_*$ is the number of columns of $X_*$. By plugging Eq. (64) into Eq. (63), we have
$$\langle v_1(\lambda_{\max}), \theta^*(\lambda_{\max})\rangle = \lambda_{\max}\, n_*. \quad (65)$$
Moreover, for any feasible point $\theta \in \bar F$, we have
$$\|X_*^T \theta\|_2 \leq \sqrt{n_*}. \quad (66)$$
In view of the result in (66) and Eq. (64), it is easy to see that
$$\langle v_1(\lambda_{\max}), \theta\rangle = \langle X_* X_*^T y, \theta\rangle = \langle X_*^T y, X_*^T \theta\rangle \leq \|X_*^T y\|_2\, \|X_*^T \theta\|_2 \leq \lambda_{\max}\, n_*. \quad (67)$$
Combining the results in (65) and (67), it is easy to see that the inequality in (62) holds for all $\theta \in \bar F$, which completes the proof.

By Lemma 21, we can accurately estimate the dual optimal solution of the group Lasso problem in the following theorem. It is easy to see that the result in Theorem 22 is very similar to the one in Theorem 18 for the Lasso problem.
Theorem 22 For the group Lasso problem, suppose the dual optimal solution $\theta^*(\lambda_0)$ at $\lambda_0 \in (0, \lambda_{\max}]$ is known, and $v_1(\lambda_0)$ is given by Eq. (59). For any $\lambda \in (0, \lambda_0]$, let us define
$$v_2(\lambda, \lambda_0) = \frac{y}{\lambda} - \theta^*(\lambda_0), \quad (68)$$
$$v_2^{\perp}(\lambda, \lambda_0) = v_2(\lambda, \lambda_0) - \frac{\langle v_1(\lambda_0), v_2(\lambda, \lambda_0)\rangle}{\|v_1(\lambda_0)\|_2^2}\, v_1(\lambda_0). \quad (69)$$
Then, the dual optimal solution $\theta^*(\lambda)$ can be estimated as follows:
$$\left\|\theta^*(\lambda) - \Big(\theta^*(\lambda_0) + \tfrac{1}{2}v_2^{\perp}(\lambda, \lambda_0)\Big)\right\|_2 \leq \tfrac{1}{2}\|v_2^{\perp}(\lambda, \lambda_0)\|_2. \quad (70)$$

We omit the proof of Theorem 22 since it is exactly the same as the one of Theorem 18. Indeed, Theorem 22 is equivalent to estimating $\theta^*(\lambda)$ in a ball as follows:
$$\theta^*(\lambda) \in B\!\left(\theta^*(\lambda_0) + \tfrac{1}{2}v_2^{\perp}(\lambda, \lambda_0),\; \tfrac{1}{2}\|v_2^{\perp}(\lambda, \lambda_0)\|_2\right). \quad (71)$$
Based on this estimation and (R2'), we immediately have the following result.

Theorem 23 For the group Lasso problem, assume the dual optimal solution $\theta^*(\lambda_0)$ is known at $\lambda_0 \in (0, \lambda_{\max}]$, and $\lambda \in (0, \lambda_0]$. Then $\beta_g^*(\lambda) = 0$ if the following holds:
$$\left\|X_g^T\Big(\theta^*(\lambda_0) + \tfrac{1}{2}v_2^{\perp}(\lambda, \lambda_0)\Big)\right\|_2 < \sqrt{n_g} - \tfrac{1}{2}\|v_2^{\perp}(\lambda, \lambda_0)\|_2\,\|X_g\|_2. \quad (72)$$

Proof In view of (R2'), we only need to check if $\|X_g^T \theta^*(\lambda)\|_2 < \sqrt{n_g}$. To simplify notation, let
$$o = \theta^*(\lambda_0) + \tfrac{1}{2}v_2^{\perp}(\lambda, \lambda_0), \qquad r = \tfrac{1}{2}\|v_2^{\perp}(\lambda, \lambda_0)\|_2.$$
It is easy to see that
$$\|X_g^T\theta^*(\lambda)\|_2 \leq \|X_g^T(\theta^*(\lambda) - o)\|_2 + \|X_g^T o\|_2 < \|X_g\|_2\,\|\theta^*(\lambda) - o\|_2 + \sqrt{n_g} - r\|X_g\|_2 \leq r\|X_g\|_2 + \sqrt{n_g} - r\|X_g\|_2 = \sqrt{n_g}, \quad (73)$$
which completes the proof. The second and third inequalities in (73) are due to (72) and Theorem 22, respectively.

In view of Eq. (52) and Theorem 23, we can derive the EDPP rule to discard the inactive groups for the group Lasso problem as follows.

Corollary 24 EDPP: For the group Lasso problem (50), suppose we are given a sequence of parameter values $\lambda_{\max} = \lambda_0 > \lambda_1 > \ldots > \lambda_K$. For any integer $0 \leq k < K$, we have $\beta_g^*(\lambda_{k+1}) = 0$ if $\beta^*(\lambda_k)$ is known and the following holds:
$$\left\|X_g^T\Big(\frac{y - \sum_{l=1}^{G} X_l \beta_l^*(\lambda_k)}{\lambda_k} + \tfrac{1}{2}v_2^{\perp}(\lambda_{k+1}, \lambda_k)\Big)\right\|_2 < \sqrt{n_g} - \tfrac{1}{2}\|v_2^{\perp}(\lambda_{k+1}, \lambda_k)\|_2\,\|X_g\|_2.$$
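As with the Lasso case, the group rule of Corollary 24 translates directly into a short screening routine. The following NumPy sketch is an illustration only, under the stated formulas; the function name, inputs, and the use of the spectral norm for $\|X_g\|_2$ are assumptions.

```python
import numpy as np

def group_edpp_screen(Xg_list, y, lam, lam0, beta0_groups, lam_max):
    """Sketch of the group-Lasso EDPP test (Corollary 24); returns True for groups certified inactive.

    Xg_list: list of N x n_g blocks; beta0_groups: list of group coefficients beta_g*(lam0).
    """
    residual = y - sum(Xg @ bg for Xg, bg in zip(Xg_list, beta0_groups))
    theta0 = residual / lam0                              # dual optimum at lam0, Eq. (52)
    if lam0 < lam_max:
        v1 = y / lam0 - theta0
    else:                                                 # lam0 == lambda_max, Eq. (59)
        scores = [np.linalg.norm(Xg.T @ y) / np.sqrt(Xg.shape[1]) for Xg in Xg_list]
        X_star = Xg_list[int(np.argmax(scores))]
        v1 = X_star @ (X_star.T @ y)
    v2 = y / lam - theta0
    v2_perp = v2 - (v1 @ v2) / (v1 @ v1) * v1
    center = theta0 + 0.5 * v2_perp
    r = 0.5 * np.linalg.norm(v2_perp)
    inactive = []
    for Xg in Xg_list:
        ng = Xg.shape[1]
        spec = np.linalg.norm(Xg, 2)                      # spectral norm ||X_g||_2
        inactive.append(np.linalg.norm(Xg.T @ center) < np.sqrt(ng) - r * spec)
    return inactive
```

The only structural difference from the Lasso sketch is that the scalar test $|x_i^T(\cdot)| < 1 - \cdots$ becomes a group-norm test against $\sqrt{n_g}$.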
4. Experiments

In this section, we evaluate the proposed EDPP rules for Lasso and group Lasso on both synthetic and real data sets. To measure the performance of our screening rules, we compute the rejection ratio and speedup (please refer to Section 2.3.3 for details). Because the EDPP rule is safe, i.e., no active features/groups will be mistakenly discarded, the rejection ratio will be less than one.

In Section 4.1, we conduct two sets of experiments to compare the performance of EDPP against several state-of-the-art screening methods. We first compare the performance of the basic versions of EDPP, DOME, SAFE, and strong rule. Then, we focus on the sequential versions of EDPP, SAFE, and strong rule. Notice that SAFE and EDPP are safe; however, strong rule may mistakenly discard features with nonzero coefficients in the solution. Although DOME is also safe for the Lasso problem, it is unclear if there exists a sequential version of DOME. Recall that real applications usually favor the sequential screening rules because we need to solve a sequence of Lasso problems to determine an appropriate parameter value (Tibshirani et al., 2012). Moreover, DOME assumes a special structure on the data, i.e., each feature and the response vector should be normalized to have unit length.

In Section 4.2, we compare EDPP with strong rule for the group Lasso problem on synthetic data sets. We are not aware of any safe screening rules for the group Lasso problem at this point. For SAFE and DOME, it is not straightforward to extend them to the group Lasso problem.

4.1 EDPP for the Lasso Problem

For the Lasso problem, we first compare the performance of the basic versions of EDPP, DOME, SAFE and strong rule in Section 4.1.1. Then, we compare the performance of the sequential versions of EDPP, SAFE and strong rule in Section 4.1.2.

4.1.1 Evaluation of the Basic EDPP Rule

In this section, we perform experiments on six real data sets to compare the performance of the basic versions of SAFE, DOME, strong rule and EDPP. Briefly speaking, suppose that we are given a parameter value $\lambda$. Basic versions of the aforementioned screening rules always make use of $\beta^*(\lambda_{\max})$ to identify the zero components of $\beta^*(\lambda)$. Take EDPP for example. The basic version of EDPP can be obtained by replacing $\beta^*(\lambda_k)$ and $v_2^{\perp}(\lambda_{k+1}, \lambda_k)$ with $\beta^*(\lambda_0)$ and $v_2^{\perp}(\lambda_k, \lambda_0)$, respectively, in (49) for all $k = 1, \ldots, K$ (see the sketch after the data set descriptions below).

In this experiment, we report the rejection ratios of the basic SAFE, DOME, strong rule and EDPP along a sequence of 100 parameter values equally spaced on the $\lambda/\lambda_{\max}$ scale from 0.05 to 1.0. We note that DOME requires that all features of the data sets have unit length. Therefore, to compare the performance of DOME with SAFE, strong rule and EDPP, we normalize the features of all the data sets used in this section. However, it is worthwhile to mention that SAFE, strong rule and EDPP do not assume any specific structures on the data set.

The data sets used in this section are listed as follows:

a) Colon Cancer data set (Alon et al., 1999);
b) Lung Cancer data set (Bhattacharjee et al., 2001);
c) Prostate Cancer data set (Petricoin et al., 2002);
d) PIE face image data set (Sim et al., 2003; Cai et al., 2007);
e) MNIST handwritten digit data set (Lecun et al., 1998);
f) COIL-100 image data set (Nene et al., 1996; Cai et al., 2011).

[Figure 2: rejection ratios of SAFE, DOME, Strong Rule and EDPP versus $\lambda/\lambda_{\max}$ on (a) Colon Cancer, $X \in \mathbb{R}^{62\times 2000}$; (b) Lung Cancer, $X \in \mathbb{R}^{203\times 12600}$; (c) Prostate Cancer, $X \in \mathbb{R}^{132\times 15154}$; (d) PIE, $X \in \mathbb{R}^{1024\times 11553}$; (e) MNIST, $X \in \mathbb{R}^{784\times 50000}$; (f) COIL-100, $X \in \mathbb{R}^{1024\times 7199}$.]

Figure 2: Comparison of basic versions of SAFE, DOME, Strong Rule and EDPP on six real data sets.

The Colon Cancer Data Set This data set contains gene expression information of 22 normal tissues and 40 colon cancer tissues, each with 2000 gene expression values.

The Lung Cancer Data Set This data set contains gene expression information of 186 lung tumors and 17 normal lung specimens. Each specimen has 12600 expression values.

The COIL-100 Image Data Set The data set consists of images of 100 objects. The images of each object are taken every 5 degrees by rotating the object, yielding 72 images per object. The dimension of each image is $32 \times 32$. In each trial, we randomly select one image as the response vector and use the remaining ones as the data matrix. We run 100 trials and report the average performance of the screening rules.

The description and the experimental settings for the Prostate Cancer data set, the PIE face image data set and the MNIST handwritten digit data set are given in Section 2.3.3.
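As a concrete illustration of the basic variant referred to above, the following minimal sketch screens every $\lambda_k$ directly from $\lambda_{\max}$, where $\beta^*(\lambda_{\max}) = 0$ and $\theta^*(\lambda_{\max}) = y/\lambda_{\max}$; it is not from the paper and its names are illustrative.

```python
import numpy as np

def basic_edpp_mask(X, y, lam, lam_max):
    """Basic EDPP: apply test (49) with lambda_0 = lambda_max, so no solve at lambda_0 is needed."""
    theta0 = y / lam_max
    i_star = np.argmax(np.abs(X.T @ y))
    v1 = X[:, i_star] * np.sign(X[:, i_star] @ y)        # v1(lambda_max)
    v2 = y / lam - theta0
    v2_perp = v2 - (v1 @ v2) / (v1 @ v1) * v1
    col_norms = np.linalg.norm(X, axis=0)
    return np.abs(X.T @ (theta0 + 0.5 * v2_perp)) < 1.0 - 0.5 * np.linalg.norm(v2_perp) * col_norms
```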
Fig. 2 reports the rejection ratios of the basic versions of SAFE, DOME, strong rule and EDPP. We can see that EDPP significantly outperforms the other three screening methods on five of the six data sets, i.e., the Colon Cancer, Lung Cancer, Prostate Cancer, MNIST, and COIL-100 data sets. On the PIE face image data set, EDPP and DOME provide similar performance and both significantly outperform SAFE and strong rule.

However, as pointed out by Tibshirani et al. (2012), the real strength of screening methods stems from their sequential versions. The reason is that the optimal parameter value is unknown in real applications. Typical approaches for model selection usually involve solving the Lasso problem many times along a sequence of parameter values. Thus, the sequential screening methods are more suitable in facilitating this scenario and more useful than their basic-version counterparts in practice (Tibshirani et al., 2012).

4.1.2 Evaluation of the Sequential EDPP Rule

In this section, we compare the performance of the sequential versions of SAFE, strong rule and EDPP by the rejection ratio and speedup. We first perform experiments on two synthetic data sets. We then apply the three screening rules to six real data sets.

Synthetic Data Sets First, we perform experiments on several synthetic problems, which have been commonly used in the sparse learning literature (Bondell and Reich, 2008; Zou and Hastie, 2005; Tibshirani, 1996). We simulate data from the true model
$$y = X\beta^* + \sigma\epsilon, \quad \epsilon \sim N(0, 1). \quad (74)$$
We generate two data sets with $250 \times 10000$ entries: Synthetic 1 and Synthetic 2. For Synthetic 1, the entries of the data matrix $X$ are i.i.d. standard Gaussian with pairwise correlation zero, i.e., $\mathrm{corr}(x_i, x_j) = 0$. For Synthetic 2, the entries of the data matrix $X$ are drawn from i.i.d. standard Gaussian with pairwise correlation $0.5^{|i-j|}$, i.e., $\mathrm{corr}(x_i, x_j) = 0.5^{|i-j|}$. To generate the response vector $y \in \mathbb{R}^{250}$ by the model in (74), we need to set the parameter $\sigma$ and construct the ground truth $\beta^* \in \mathbb{R}^{10000}$. Throughout this section, $\sigma$ is set to 0.1. To construct $\beta^*$, we randomly select $\bar p$ components, which are populated from a uniform $[-1, 1]$ distribution, and set the remaining ones to 0.

After we generate the data matrix $X$ and the response vector $y$, we run the solver with or without screening rules to solve the Lasso problems along a sequence of 100 parameter values equally spaced on the $\lambda/\lambda_{\max}$ scale from 0.05 to 1.0. We then run 100 trials and report the average performance.
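A small sketch of this data-generation procedure is given below. It is an illustration under stated assumptions: the noise enters as $\sigma\epsilon$ with $\sigma = 0.1$ as reconstructed in (74), and the feature correlation for Synthetic 2 is realized with an AR(1)-style recursion; function and parameter names are hypothetical.

```python
import numpy as np

def make_synthetic(n=250, p=10000, nnz=100, corr=0.0, sigma=0.1, rng=None):
    """Generate (X, y, beta) as in Synthetic 1 (corr=0) or Synthetic 2 (corr=0.5)."""
    rng = np.random.default_rng() if rng is None else rng
    if corr == 0.0:
        X = rng.standard_normal((n, p))
    else:
        # AR(1)-style construction: corr(x_i, x_j) = corr**|i-j| with unit column variance
        Z = rng.standard_normal((n, p))
        X = np.empty((n, p))
        X[:, 0] = Z[:, 0]
        for j in range(1, p):
            X[:, j] = corr * X[:, j - 1] + np.sqrt(1.0 - corr**2) * Z[:, j]
    beta = np.zeros(p)
    support = rng.choice(p, size=nnz, replace=False)   # the nnz = p-bar nonzero entries
    beta[support] = rng.uniform(-1.0, 1.0, size=nnz)
    y = X @ beta + sigma * rng.standard_normal(n)
    return X, y, beta
```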
We first apply the screening rules, i.e., SAFE, strong rule and EDPP, to Synthetic 1 with $\bar p = 100, 1000, 5000$ respectively. Fig. 3(a), Fig. 3(b) and Fig. 3(c) present the corresponding rejection ratios and speedup of SAFE, strong rule and EDPP. We can see that the rejection ratios of strong rule and EDPP are comparable to each other, and both of them are more effective in discarding inactive features than SAFE. In terms of the speedup, EDPP provides better performance than strong rule. The reason is that strong rule is a heuristic screening method, i.e., it may mistakenly discard active features which have nonzero components in the solution. Thus, strong rule needs to check the KKT conditions to ensure the correctness of the screening result. In contrast, the EDPP rule does not need to check the KKT conditions since the discarded features are guaranteed to be absent from the resulting sparse representation. From the last two columns of Table 2, we can observe that the running time of strong rule is about twice that of EDPP.

[Figure 3: rejection ratios and speedup of SAFE, Strong Rule and EDPP on (a) Synthetic 1, $\bar p = 100$; (b) Synthetic 1, $\bar p = 1000$; (c) Synthetic 1, $\bar p = 5000$; (d) Synthetic 2, $\bar p = 100$; (e) Synthetic 2, $\bar p = 1000$; (f) Synthetic 2, $\bar p = 5000$.]

Figure 3: Comparison of SAFE, Strong Rule and EDPP on two synthetic data sets with different numbers of nonzero components of the ground truth.

Fig. 3(d), Fig. 3(e) and Fig. 3(f) present the rejection ratios and speedup of SAFE, strong rule and EDPP on Synthetic 2 with $\bar p = 100, 1000, 5000$ respectively. We can observe patterns similar to Synthetic 1. Clearly, our method, EDPP, is very robust to variations in the intrinsic structure of the data sets and the sparsity of the ground truth.

Real Data Sets
Data         p-bar   solver   SAFE+solver  Strong Rule+solver  EDPP+solver  SAFE   Strong Rule  EDPP
Synthetic 1  100     109.01   100.09       2.67                2.47         4.60   0.65         0.36
             1000    123.60   111.32       2.97                2.71         4.59   0.66         0.37
             5000    124.92   113.09       3.00                2.72         4.57   0.65         0.36
Synthetic 2  100     107.50   96.94        2.62                2.49         4.61   0.67         0.37
             1000    113.59   104.29       2.84                2.67         4.57   0.63         0.35
             5000    125.25   113.35       3.02                2.81         4.62   0.65         0.36

Table 2: Running time (in seconds) for solving the Lasso problems along a sequence of 100 tuning parameter values equally spaced on the scale of $\lambda/\lambda_{\max}$ from 0.05 to 1 by (a): the solver (Liu et al., 2009) without screening (reported in the third column); (b): the solver combined with different screening methods (reported in the 4th to the 6th columns). The last three columns report the total running time (in seconds) for the screening methods.

In this section, we compare the performance of the EDPP rule with SAFE and strong rule on six real data sets along a sequence of 100 parameter values equally spaced on the $\lambda/\lambda_{\max}$ scale from 0.05 to 1.0. The data sets are listed as follows:

a) Breast Cancer data set (West et al., 2001; Shevade and Keerthi, 2003);
b) Leukemia data set (Armstrong et al., 2002);
c) Prostate Cancer data set (Petricoin et al., 2002);
d) PIE face image data set (Sim et al., 2003; Cai et al., 2007);
e) MNIST handwritten digit data set (Lecun et al., 1998);
f) Street View House Number (SVHN) data set (Netzer et al., 2011).

We present the rejection ratios and speedup of EDPP, SAFE and strong rule in Fig. 4. Table 3 reports the running time of the solver with or without screening for solving the 100 Lasso problems, and that of the screening rules.

The Breast Cancer Data Set This data set contains 44 tumor samples, each of which is represented by 7129 genes. Therefore, the data matrix $X$ is of size $44 \times 7129$. The response vector $y \in \{1, -1\}^{44}$ contains the binary label of each sample.

The Leukemia Data Set This data set is a DNA microarray data set, containing 52 samples and 11225 genes. Therefore, the data matrix $X$ is of size $55 \times 11225$. The response vector $y \in \{1, -1\}^{52}$ contains the binary label of each sample.

The SVHN Data Set The SVHN data set contains color images of street view house numbers, including 73257 images for training and 26032 for testing. The dimension of each image is $32 \times 32$. In each trial, we first randomly select an image as the response $y \in \mathbb{R}^{3072}$, and then use the remaining ones to form the data matrix $X \in \mathbb{R}^{3072\times 99288}$. We run 100 trials and report the average performance.
[Figure 4: rejection ratios and speedup of SAFE, Strong Rule and EDPP on (a) Breast Cancer, $X \in \mathbb{R}^{44\times 7129}$; (b) Leukemia, $X \in \mathbb{R}^{55\times 11225}$; (c) Prostate Cancer, $X \in \mathbb{R}^{132\times 15154}$; (d) PIE, $X \in \mathbb{R}^{1024\times 11553}$; (e) MNIST, $X \in \mathbb{R}^{784\times 50000}$; (f) SVHN, $X \in \mathbb{R}^{3072\times 99288}$.]

Figure 4: Comparison of SAFE, Strong Rule, and EDPP on six real data sets.

The description and the experiment settings for the Prostate Cancer data set, the PIE face image data set and the MNIST handwritten digit data set are given in Section 2.3.3.

Data             solver    SAFE+solver  Strong Rule+solver  EDPP+solver  SAFE    Strong Rule  EDPP
Breast Cancer    12.70     7.20         1.31                1.24         0.44    0.06         0.05
Leukemia         16.99     9.22         1.15                1.03         0.91    0.09         0.07
Prostate Cancer  121.41    47.17        4.83                3.70         3.60    0.46         0.23
PIE              629.94    138.33       4.84                4.13         19.93   2.54         1.33
MNIST            2566.26   702.21       15.15               11.12        64.81   8.14         4.19
SVHN             11023.30  5220.88      90.65               59.71        583.12  61.02        31.64

Table 3: Running time (in seconds) for solving the Lasso problems along a sequence of 100 tuning parameter values equally spaced on the scale of $\lambda/\lambda_{\max}$ from 0.05 to 1 by (a): the solver (Liu et al., 2009) without screening (reported in the second column); (b): the solver combined with different screening methods (reported in the 3rd to the 5th columns). The last three columns report the total running time (in seconds) for the screening methods.

From Fig. 4, we can see that the rejection ratios of strong rule and EDPP are comparable to each other. Compared to SAFE, both strong rule and EDPP are able to identify far more inactive features, leading to a much higher speedup. However, because strong rule needs to check the KKT conditions to ensure the correctness of the screening results, the speedup gained by EDPP is higher than that by strong rule. When the size of the data matrix is not very large, e.g., the Breast Cancer and Leukemia data sets, the speedup gained by EDPP is slightly higher than that by strong rule. However, when the size of the data matrix is large, e.g., the MNIST and SVHN data sets, the speedup gained by EDPP is significantly higher than that by strong rule. Moreover, we can also observe from Fig. 4 that the larger the data matrix is, the higher the speedup gained by EDPP. More specifically, for the small data sets, e.g., the Breast Cancer, Leukemia and Prostate Cancer data sets, the speedup gained by EDPP is about 10, 17 and 30 times. In contrast, for the large data sets, e.g., the PIE, MNIST and SVHN data sets, the speedup gained by EDPP is two orders of magnitude. Take the SVHN data set for example. The solver without screening needs about 3 hours to solve the 100 Lasso problems. Combined with the EDPP rule, the solver needs less than 1 minute to complete the task.

Clearly, the proposed EDPP screening rule is very effective in accelerating the computation of Lasso, especially for large-scale problems, and outperforms state-of-the-art approaches like SAFE and strong rule. Notice that the EDPP method is safe in the sense that the discarded features are guaranteed to have zero coefficients in the solution.

EDPP with Least-Angle Regression (LARS) As we mentioned in the introduction, we can combine EDPP with any existing solver. In this experiment, we integrate EDPP and strong rule with another state-of-the-art solver for Lasso, i.e., Least-Angle Regression (LARS) (Efron et al., 2004). We perform experiments on the same real data sets used in the last section with the same experiment settings. Because the rejection ratios of screening methods are irrelevant to the solvers, we only report the speedup. Table 4 reports the running time of LARS with or without screening for solving the 100 Lasso problems, and that of the screening methods. Fig. 5 shows the speedup of these two methods. We can still observe a substantial speedup gained by EDPP. The reason is that EDPP has a very low computational cost (see Table 4) and it is very effective in discarding inactive features (see Fig. 4).
Data             LARS     Strong Rule+LARS  EDPP+LARS  Strong Rule  EDPP
Breast Cancer    1.30     0.06              0.04       0.04         0.03
Leukemia         1.46     0.09              0.05       0.07         0.04
Prostate Cancer  5.76     1.04              0.37       0.42         0.24
PIE              22.52    2.42              1.31       2.30         1.21
MNIST            92.53    8.53              4.75       8.36         4.34
SVHN             1017.20  65.83             35.73      62.53        32.00

Table 4: Running time (in seconds) for solving the Lasso problems along a sequence of 100 tuning parameter values equally spaced on the scale of $\lambda/\lambda_{\max}$ from 0.05 to 1 by (a): the solver (Efron et al., 2004; Mairal et al., 2010) without screening (reported in the second column); (b): the solver combined with different screening methods (reported in the 3rd and 4th columns). The last two columns report the total running time (in seconds) for the screening methods.

[Figure 5: speedup of Strong Rule and EDPP combined with LARS on (a) Breast Cancer; (b) Leukemia; (c) Prostate Cancer; (d) PIE; (e) MNIST; (f) SVHN.]

Figure 5: The speedup gained by Strong Rule and EDPP combined with LARS on six real data sets.

4.2 EDPP for the Group Lasso Problem

In this experiment, we evaluate the performance of EDPP and strong rule with different numbers of groups. The data matrix $X$ is fixed to be $250 \times 200000$. The entries of the response vector $y$ and the data matrix $X$ are generated i.i.d. from a standard Gaussian distribution. For each experiment, we repeat the computation 20 times and report the average results. Moreover, let $n_g$ denote the number of groups and $s_g$ the average group size. For example, if $n_g$ is 10000, then $s_g = p/n_g = 20$.

From Figure 6, we can see that EDPP and strong rule are able to discard more inactive groups when the number of groups $n_g$ increases. The intuition behind this observation is that the estimation of the dual optimal solution is more accurate with a smaller group size. Notice that a large $n_g$ implies a small average group size.

[Figure 6: rejection ratios and speedup of Strong Rule and EDPP for (a) $n_g = 10000$; (b) $n_g = 20000$; (c) $n_g = 40000$.]

Figure 6: Comparison of EDPP and strong rules with different numbers of groups.

n_g     solver   Strong Rule+solver  EDPP+solver  Strong Rule  EDPP
10000   4535.54  296.60              53.81        13.99        8.32
20000   5536.18  179.48              46.13        14.16        8.61
40000   6144.48  104.50              37.78        13.13        8.37

Table 5: Running time (in seconds) for solving the group Lasso problems along a sequence of 100 tuning parameter values equally spaced on the scale of $\lambda/\lambda_{\max}$ from 0.05 to 1.0 by (a): the solver from SLEP without screening (reported in the second column); (b): the solver combined with different screening methods (reported in the 3rd and 4th columns). The last two columns report the total running time (in seconds) for the screening methods. The data matrix $X$ is of size $250 \times 200000$.

Figure 6 also implies that, compared to strong rule, EDPP is able to discard more inactive groups and is more robust with respect to different values of $n_g$. Table 5 further demonstrates the effectiveness of EDPP in improving the efficiency of the solver. When $n_g = 10000$, the efficiency of the solver is improved by about 80 times. When $n_g = 20000$ and $40000$, the efficiency of the solver is boosted by about 120 and 160 times with EDPP, respectively.

5. Conclusion

In this paper, we develop new screening rules for the Lasso problem by making use of the properties of the projection operators with respect to a closed convex set. Our proposed methods, i.e., the DPP screening rules, are able to effectively identify inactive predictors of the Lasso problem, thus greatly reducing the size of the optimization problem. Moreover, we further improve the DPP rule and propose the enhanced DPP rule, which is more effective in discarding inactive features than the DPP rule. The idea of the family of DPP rules can be easily generalized to identify the inactive groups of the group Lasso problem. Extensive numerical experiments on both synthetic and real data demonstrate the effectiveness of the proposed rules. It is worthwhile to mention that the family of DPP rules can be combined with any Lasso solver as a speedup tool. In the future, we plan to generalize our ideas to other sparse formulations with more general structured sparse penalties, e.g., tree/graph Lasso and fused Lasso.

Acknowledgments

We would like to acknowledge support for this project from the National Science Foundation (IIS-0953662, IIS-1421057, and IIS-1421100) and the National Institutes of Health (R01 LM010730).

Appendix A.

In this appendix, we give the detailed derivation of the dual problem of Lasso.

A1. Dual Formulation

Assuming the data matrix is $X \in \mathbb{R}^{N\times p}$, the standard Lasso problem is given by:
$$\inf_{\beta \in \mathbb{R}^p} \; \frac{1}{2}\|y - X\beta\|_2^2 + \lambda\|\beta\|_1. \quad (75)$$
For completeness, we give a detailed derivation of the dual formulation of (75) in this section. Note that problem (75) has no constraints, so its dual problem is trivial and useless. A common trick (Boyd and Vandenberghe, 2004) is to introduce a new set of variables $z = y - X\beta$ such that problem (75) becomes:
$$\inf_{\beta, z} \; \frac{1}{2}\|z\|_2^2 + \lambda\|\beta\|_1, \quad \text{subject to } z = y - X\beta. \quad (76)$$
By introducing the dual variables $\eta \in \mathbb{R}^N$, we get the Lagrangian of problem (76):
$$L(\beta, z, \eta) = \frac{1}{2}\|z\|_2^2 + \lambda\|\beta\|_1 + \eta^T(y - X\beta - z). \quad (77)$$
For the Lagrangian, the primal variables are $\beta$ and $z$, and the dual function $g(\eta)$ is:
$$g(\eta) = \inf_{\beta, z} L(\beta, z, \eta) = \eta^T y + \inf_{\beta}\Big(-\eta^T X\beta + \lambda\|\beta\|_1\Big) + \inf_{z}\Big(\frac{1}{2}\|z\|_2^2 - \eta^T z\Big). \quad (78)$$
In order to get $g(\eta)$, we need to solve the following two optimization problems:
$$\inf_{\beta} \; -\eta^T X\beta + \lambda\|\beta\|_1, \quad (79)$$
and
$$\inf_{z} \; \frac{1}{2}\|z\|_2^2 - \eta^T z. \quad (80)$$
Let us first consider problem (79). Denote the objective function of problem (79) as
$$f_1(\beta) = -\eta^T X\beta + \lambda\|\beta\|_1. \quad (81)$$
Clearly, $f_1(\beta)$ is convex but not smooth. Therefore, let us consider its subdifferential
$$\partial f_1(\beta) = -X^T\eta + \lambda v, \quad \text{in which } \|v\|_\infty \leq 1 \text{ and } v^T\beta = \|\beta\|_1,$$
i.e., $v$ is a subgradient of $\|\beta\|_1$. The necessary condition for $f_1$ to attain an optimum is that there exists $\beta^0$ such that
$$0 \in \partial f_1(\beta^0) = \{-X^T\eta + \lambda v^0\}, \quad \text{where } v^0 \in \partial\|\beta^0\|_1.$$
In other words, $\beta^0$ and $v^0$ should satisfy
$$\lambda v^0 = X^T\eta, \quad \|v^0\|_\infty \leq 1, \quad (v^0)^T\beta^0 = \|\beta^0\|_1,$$
which is equivalent to
$$|x_i^T\eta| \leq \lambda, \quad i = 1, 2, \ldots, p. \quad (82)$$
Then we plug $\lambda v^0 = X^T\eta$ and $(v^0)^T\beta^0 = \|\beta^0\|_1$ into Eq. (81):
$$f_1(\beta^0) = -\eta^T X\beta^0 + (X^T\eta)^T\beta^0 = 0. \quad (83)$$
Therefore, the optimal value of problem (79) is 0.

Next, let us consider problem (80). Denote the objective function of problem (80) as $f_2(z)$ and rewrite it as:
$$f_2(z) = \frac{1}{2}\Big(\|z - \eta\|_2^2 - \|\eta\|_2^2\Big). \quad (84)$$
Clearly,
$$z^0 = \operatorname*{argmin}_z f_2(z) = \eta, \quad \text{and} \quad \inf_z f_2(z) = -\frac{1}{2}\|\eta\|_2^2.$$
Combining everything above, we get the dual problem:
$$\sup_{\eta} \; g(\eta) = \eta^T y - \frac{1}{2}\|\eta\|_2^2, \quad \text{subject to } |x_i^T\eta| \leq \lambda, \; i = 1, 2, \ldots, p, \quad (85)$$
which is equivalent to
$$\sup_{\eta} \; g(\eta) = \frac{1}{2}\|y\|_2^2 - \frac{1}{2}\|\eta - y\|_2^2, \quad \text{subject to } |x_i^T\eta| \leq \lambda, \; i = 1, 2, \ldots, p. \quad (86)$$
By a simple re-scaling of the dual variables, i.e., letting $\theta = \eta/\lambda$, problem (86) transforms to:
$$\sup_{\theta} \; g(\theta) = \frac{1}{2}\|y\|_2^2 - \frac{\lambda^2}{2}\left\|\theta - \frac{y}{\lambda}\right\|_2^2, \quad \text{subject to } |x_i^T\theta| \leq 1, \; i = 1, 2, \ldots, p. \quad (87)$$

A2. The KKT Conditions

Problem (76) is clearly convex and its constraints are all affine. By Slater's condition, as long as problem (76) is feasible we have strong duality. Denote $\beta^*$, $z^*$ and $\eta^*$ as the optimal primal and dual variables. The Lagrangian is
$$L(\beta, z, \eta) = \frac{1}{2}\|z\|_2^2 + \lambda\|\beta\|_1 + \eta^T(y - X\beta - z). \quad (88)$$
From the KKT conditions, we have
$$0 \in \partial_\beta L(\beta^*, z^*, \eta^*) = -X^T\eta^* + \lambda v, \quad \text{in which } \|v\|_\infty \leq 1 \text{ and } v^T\beta^* = \|\beta^*\|_1, \quad (89)$$
$$\nabla_z L(\beta^*, z^*, \eta^*) = z^* - \eta^* = 0, \quad (90)$$
$$\nabla_\eta L(\beta^*, z^*, \eta^*) = y - X\beta^* - z^* = 0. \quad (91)$$
From Eq. (90) and (91), we have:
$$y = X\beta^* + \eta^* = X\beta^* + \lambda\theta^*, \quad \text{where } \theta^* = \eta^*/\lambda. \quad (92)$$
From Eq. (89), we know there exists $v^* \in \partial\|\beta^*\|_1$ such that
$$X^T\eta^* = \lambda v^*, \quad \|v^*\|_\infty \leq 1 \quad \text{and} \quad (v^*)^T\beta^* = \|\beta^*\|_1,$$
which is equivalent to
$$|x_i^T\theta^*| \leq 1, \; i = 1, 2, \ldots, p, \quad \text{and} \quad (\theta^*)^T X\beta^* = \|\beta^*\|_1. \quad (93)$$
From Eq. (93), it is easy to conclude:
$$(\theta^*)^T x_i \in \begin{cases} \operatorname{sign}(\beta_i^*), & \text{if } \beta_i^* \neq 0, \\ [-1, 1], & \text{if } \beta_i^* = 0. \end{cases} \quad (94)$$
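To make the primal-dual relations (92) and (94) concrete, the following is a small self-contained numerical check that is not part of the paper. It uses a tiny proximal-gradient (ISTA) solver written only for illustration; any Lasso solver could be substituted.

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=5000):
    """Tiny proximal-gradient solver for the objective in (75); illustrative only."""
    L = np.linalg.norm(X, 2) ** 2                  # Lipschitz constant of the smooth part
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        z = beta - grad / L
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return beta

# Check: theta* = (y - X beta*)/lambda is dual feasible (|x_i^T theta*| <= 1) and
# x_i^T theta* agrees with sign(beta*_i) on the support, up to solver tolerance.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
y = rng.standard_normal(50)
lam = 0.5 * np.max(np.abs(X.T @ y))               # lambda = 0.5 * lambda_max
beta = ista_lasso(X, y, lam)
theta = (y - X @ beta) / lam
print(np.max(np.abs(X.T @ theta)))                # approximately <= 1
support = beta != 0
print(np.max(np.abs((X.T @ theta - np.sign(beta))[support])) if support.any() else 0.0)
```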
Appendix B.

In this appendix, we present the detailed derivation of the dual problem of group Lasso.

B1. Dual Formulation

Assuming the data matrix for the $g$th group is $X_g \in \mathbb{R}^{N\times n_g}$ and $p = \sum_{g=1}^{G} n_g$, the group Lasso problem is given by:
$$\inf_{\beta \in \mathbb{R}^p} \; \frac{1}{2}\left\|y - \sum_{g=1}^{G} X_g\beta_g\right\|_2^2 + \lambda\sum_{g=1}^{G}\sqrt{n_g}\,\|\beta_g\|_2. \quad (95)$$
Let $z = y - \sum_{g=1}^{G} X_g\beta_g$ and problem (95) becomes:
$$\inf_{\beta, z} \; \frac{1}{2}\|z\|_2^2 + \lambda\sum_{g=1}^{G}\sqrt{n_g}\,\|\beta_g\|_2, \quad \text{subject to } z = y - \sum_{g=1}^{G} X_g\beta_g. \quad (96)$$
By introducing the dual variables $\eta \in \mathbb{R}^N$, the Lagrangian of problem (96) is:
$$L(\beta, z, \eta) = \frac{1}{2}\|z\|_2^2 + \lambda\sum_{g=1}^{G}\sqrt{n_g}\,\|\beta_g\|_2 + \eta^T\Big(y - \sum_{g=1}^{G} X_g\beta_g - z\Big), \quad (97)$$
and the dual function $g(\eta)$ is:
$$g(\eta) = \inf_{\beta, z} L(\beta, z, \eta) = \eta^T y + \inf_{\beta}\Big(-\eta^T\sum_{g=1}^{G} X_g\beta_g + \lambda\sum_{g=1}^{G}\sqrt{n_g}\,\|\beta_g\|_2\Big) + \inf_{z}\Big(\frac{1}{2}\|z\|_2^2 - \eta^T z\Big). \quad (98)$$
In order to get $g(\eta)$, let us solve the following two optimization problems:
$$\inf_{\beta} \; -\eta^T\sum_{g=1}^{G} X_g\beta_g + \lambda\sum_{g=1}^{G}\sqrt{n_g}\,\|\beta_g\|_2, \quad (99)$$
and
$$\inf_{z} \; \frac{1}{2}\|z\|_2^2 - \eta^T z. \quad (100)$$
Let us first consider problem (99). Denote the objective function of problem (99) as
$$\hat f(\beta) = -\eta^T\sum_{g=1}^{G} X_g\beta_g + \lambda\sum_{g=1}^{G}\sqrt{n_g}\,\|\beta_g\|_2, \quad (101)$$
and let
$$\hat f_g(\beta_g) = -\eta^T X_g\beta_g + \lambda\sqrt{n_g}\,\|\beta_g\|_2, \quad g = 1, 2, \ldots, G,$$
so that we can split problem (99) into a set of subproblems. Clearly, $\hat f_g(\beta_g)$ is convex but not smooth because it has a singular point at 0. Consider the subdifferential of $\hat f_g$,
$$\partial \hat f_g(\beta_g) = -X_g^T\eta + \lambda\sqrt{n_g}\, v_g, \quad g = 1, 2, \ldots, G,$$
where $v_g$ is a subgradient of $\|\beta_g\|_2$:
$$v_g \in \begin{cases} \dfrac{\beta_g}{\|\beta_g\|_2}, & \text{if } \beta_g \neq 0, \\[4pt] \{u : \|u\|_2 \leq 1\}, & \text{if } \beta_g = 0. \end{cases} \quad (102)$$
Let $\beta_g^0$ be the optimal solution of $\hat f_g$; then there exists $v_g^0 \in \partial\|\beta_g^0\|_2$ such that
$$-X_g^T\eta + \lambda\sqrt{n_g}\, v_g^0 = 0.$$
If $\beta_g^0 = 0$, clearly $\hat f_g(\beta_g^0) = 0$. Otherwise, since $\lambda\sqrt{n_g}\, v_g^0 = X_g^T\eta$ and $v_g^0 = \beta_g^0/\|\beta_g^0\|_2$, we have
$$\hat f_g(\beta_g^0) = -\lambda\sqrt{n_g}\,\frac{(\beta_g^0)^T\beta_g^0}{\|\beta_g^0\|_2} + \lambda\sqrt{n_g}\,\|\beta_g^0\|_2 = 0.$$
All together, we can conclude that $\inf_{\beta_g} \hat f_g(\beta_g) = 0$ for $g = 1, 2, \ldots, G$, and thus
$$\inf_{\beta} \hat f(\beta) = \inf_{\beta}\sum_{g=1}^{G}\hat f_g(\beta_g) = \sum_{g=1}^{G}\inf_{\beta_g}\hat f_g(\beta_g) = 0.$$
The second equality is due to the fact that the $\beta_g$'s are independent. Note, from Eq. (102), it is easy to see that $\|v_g\|_2 \leq 1$. Since $\lambda\sqrt{n_g}\, v_g^0 = X_g^T\eta$, we get a constraint on $\eta$, i.e., $\eta$ should satisfy:
$$\|X_g^T\eta\|_2 \leq \lambda\sqrt{n_g}, \quad g = 1, 2, \ldots, G.$$
Next, let us consider problem (100). Since problem (100) is exactly the same as problem (80), we conclude:
$$z^0 = \operatorname*{argmin}_z \frac{1}{2}\|z\|_2^2 - \eta^T z = \eta, \quad \text{and} \quad \inf_z \frac{1}{2}\|z\|_2^2 - \eta^T z = -\frac{1}{2}\|\eta\|_2^2.$$
Therefore, the dual function is $g(\eta) = \eta^T y - \tfrac{1}{2}\|\eta\|_2^2$. Combining everything above, we get the dual formulation of the group Lasso:
$$\sup_{\eta} \; g(\eta) = \eta^T y - \frac{1}{2}\|\eta\|_2^2, \quad \text{subject to } \|X_g^T\eta\|_2 \leq \lambda\sqrt{n_g}, \; g = 1, 2, \ldots, G, \quad (103)$$
which is equivalent to
$$\sup_{\eta} \; g(\eta) = \frac{1}{2}\|y\|_2^2 - \frac{1}{2}\|\eta - y\|_2^2, \quad \text{subject to } \|X_g^T\eta\|_2 \leq \lambda\sqrt{n_g}, \; g = 1, 2, \ldots, G. \quad (104)$$
By a simple re-scaling of the dual variables, i.e., letting $\theta = \eta/\lambda$, problem (104) transforms to:
$$\sup_{\theta} \; g(\theta) = \frac{1}{2}\|y\|_2^2 - \frac{\lambda^2}{2}\left\|\theta - \frac{y}{\lambda}\right\|_2^2, \quad \text{subject to } \|X_g^T\theta\|_2 \leq \sqrt{n_g}, \; g = 1, 2, \ldots, G. \quad (105)$$

B2. The KKT Conditions

Clearly, problem (96) is convex and its constraints are all affine. By Slater's condition, as long as problem (96) is feasible we have strong duality. Denote $\beta^*$, $z^*$ and $\eta^*$ as the optimal primal and dual variables. The Lagrangian is
$$L(\beta, z, \eta) = \frac{1}{2}\|z\|_2^2 + \lambda\sum_{g=1}^{G}\sqrt{n_g}\,\|\beta_g\|_2 + \eta^T\Big(y - \sum_{g=1}^{G} X_g\beta_g - z\Big). \quad (106)$$
From the KKT conditions, we have
$$0 \in \partial_{\beta_g} L(\beta^*, z^*, \eta^*) = -X_g^T\eta^* + \lambda\sqrt{n_g}\, v_g, \quad \text{in which } v_g \in \partial\|\beta_g^*\|_2, \; g = 1, 2, \ldots, G, \quad (107)$$
$$\nabla_z L(\beta^*, z^*, \eta^*) = z^* - \eta^* = 0, \quad (108)$$
$$\nabla_\eta L(\beta^*, z^*, \eta^*) = y - \sum_{g=1}^{G} X_g\beta_g^* - z^* = 0. \quad (109)$$
From Eq. (108) and (109), we have:
$$y = \sum_{g=1}^{G} X_g\beta_g^* + \eta^* = \sum_{g=1}^{G} X_g\beta_g^* + \lambda\theta^*, \quad \text{where } \theta^* = \eta^*/\lambda. \quad (110)$$
From Eq. (107), we know there exists $v_g^0 \in \partial\|\beta_g^*\|_2$ such that $X_g^T\eta^* = \lambda\sqrt{n_g}\, v_g^0$ and
$$v_g^0 \in \begin{cases} \dfrac{\beta_g^*}{\|\beta_g^*\|_2}, & \text{if } \beta_g^* \neq 0, \\[4pt] \{u : \|u\|_2 \leq 1\}, & \text{if } \beta_g^* = 0. \end{cases}$$
Then the following holds:
$$X_g^T\theta^* \in \begin{cases} \sqrt{n_g}\,\dfrac{\beta_g^*}{\|\beta_g^*\|_2}, & \text{if } \beta_g^* \neq 0, \\[4pt] \{\sqrt{n_g}\, u : \|u\|_2 \leq 1\}, & \text{if } \beta_g^* = 0, \end{cases} \quad (111)$$
for $g = 1, 2, \ldots, G$. Clearly, if $\|X_g^T\theta^*\|_2 < \sqrt{n_g}$, we can conclude that $\beta_g^* = 0$.
References

U. Alon, N. Barkai, D. Notterman, K. Gish, S. Ybarra, D. Mack, and A. Levine. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Cell Biology, 96:6745–6750, 1999.

S. Armstrong, J. Staunton, L. Silverman, R. Pieters, M. den Boer, M. Minden, S. Sallan, E. Lander, T. Golub, and S. Korsmeyer. MLL translocations specify a distinct gene expression profile that distinguishes a unique leukemia. Nature Genetics, 30:41–47, 2002.

H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 2011.

S. R. Becker, E. Candes, and M. Grant. Templates for convex cone problems with applications to sparse signal recovery. Technical report, Stanford University, 2010.

D. P. Bertsekas. Convex Analysis and Optimization. Athena Scientific, 2003.

A. Bhattacharjee, W. Richards, J. Staunton, C. Li, S. Monti, P. Vasa, C. Ladd, J. Beheshti, R. Bueno, M. Gillette, M. Loda, G. Weber, E. Mark, E. Lander, W. Wong, B. Johnson, T. Golub, D. Sugarbaker, and M. Meyerson. Classification of human lung carcinomas by mRNA expression profiling reveals distinct adenocarcinoma subclasses. Proceedings of the National Academy of Sciences, 98:13790–13795, 2001.

H. Bondell and B. Reich. Simultaneous regression shrinkage, variable selection and clustering of predictors with OSCAR. Biometrics, 64:115–123, 2008.

S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

A. Bruckstein, D. Donoho, and M. Elad. From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Review, 51:34–81, 2009.

D. Cai, X. He, and J. Han. Efficient kernel discriminant analysis via spectral regression. In ICDM, 2007.

D. Cai, X. He, J. Han, and T. Huang. Graph regularized non-negative matrix factorization for data representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33:1548–1560, 2011.

E. Candes. Compressive sampling. In Proceedings of the International Congress of Mathematics, 2006.

S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Review, 43:129–159, 2001.

D. L. Donoho and Y. Tsaig. Fast solution of l-1 norm minimization problems when the solution may be sparse. IEEE Transactions on Information Theory, 54:4789–4812, 2008.

B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32:407–499, 2004.

L. El Ghaoui, V. Viallon, and T. Rabbani. Safe feature elimination in sparse supervised learning. Pacific Journal of Optimization, 8:667–698, 2012.

J. Fan and J. Lv. Sure independence screening for ultrahigh dimensional feature spaces. Journal of the Royal Statistical Society Series B, 70:849–911, 2008.

J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani. Pathwise coordinate optimization. Annals of Applied Statistics, 1:302–332, 2007.

J. Friedman, T. Hastie, and R. Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33:1–22, 2010.

S. J. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky. An interior-point method for large scale l1-regularized least squares. IEEE Journal on Selected Topics in Signal Processing, 1:606–617, 2007.

Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, 1998.

J. Liu, S. Ji, and J. Ye. SLEP: Sparse Learning with Efficient Projections. Arizona State University, 2009.

J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse coding. Journal of Machine Learning Research, 11:19–60, 2010.

S. Nene, S. Nayar, and H. Murase. Columbia object image library (COIL-100). Technical report, CUCS-006-96, Columbia University, 1996.

Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.

M. Y. Park and T. Hastie. L1-regularized path algorithm for generalized linear models. Journal of the Royal Statistical Society Series B, 69:659–677, 2007.

E. Petricoin, D. Ornstein, C. Paweletz, A. Ardekani, P. Hackett, B. Hitt, A. Velassco, C. Trucco, L. Wiegand, K. Wood, C. Simone, P. Levine, W. Linehan, M. Emmert-Buck, S. Steinberg, E. Kohn, and L. Liotta. Serum proteomic patterns for detection of prostate cancer. Journal of National Cancer Institute, 94:1576–1578, 2002.

A. Ruszczynski. Nonlinear Optimization. Princeton University Press, 2006.

S. Shevade and S. Keerthi. A simple and efficient algorithm for gene selection using sparse logistic regression. Bioinformatics, 19:2246–2253, 2003.

T. Sim, B. Baker, and M. Bsat. The CMU pose, illumination, and expression database. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25:1615–1618, 2003.

R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society Series B, 58:267–288, 1996.

R. Tibshirani, J. Bien, J. Friedman, T. Hastie, N. Simon, J. Taylor, and R. Tibshirani. Strong rules for discarding predictors in lasso-type problems. Journal of the Royal Statistical Society Series B, 74:245–266, 2012.

M. West, C. Blanchette, H. Dressman, E. Huang, S. Ishida, R. Spang, H. Zuzan, J. Olson, J. Marks, and J. Nevins. Predicting the clinical status of human breast cancer by using gene expression profiles. Proceedings of the National Academy of Sciences, 98:11462–11467, 2001.

J. Wright, Y. Ma, J. Mairal, G. Sapiro, T. Huang, and S. Yan. Sparse representation for computer vision and pattern recognition. In Proceedings of IEEE, 2010.

Z. J. Xiang and P. J. Ramadge. Fast lasso screening tests based on correlations. In IEEE ICASSP, 2012.

Z. J. Xiang, H. Xu, and P. J. Ramadge. Learning sparse representation of high dimensional data on large scale dictionaries. In NIPS, 2011.

M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society Series B, 68:49–67, 2006.

P. Zhao and B. Yu. On model selection consistency of lasso. Journal of Machine Learning Research, 7:2541–2563, 2006.

H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society Series B, 67:301–320, 2005.