1. Introduction to Multinomial Logistic Regression
2. Understanding the Mathematics Behind Multinomial Models
3. Data Preparation for Multinomial Logistic Regression
4. From Binary to Multinomial Logistic Regression
5. Interpreting Multinomial Logistic Regression Outputs
6. Generalized Additive Models (GAMs): An Overview
7. Integrating GAMs with Multinomial Logistic Regression
8. Multinomial Logistic Regression in Action
9. Advanced Techniques and Future Directions in Multinomial Analysis
Multinomial logistic regression (MLR) is a statistical technique that expands on traditional logistic regression to accommodate multiple categorical outcomes. This is particularly useful when the dependent variable in question is not binary but includes three or more unordered categories. MLR is a go-to method when the goal is to model the probabilities of the different possible outcomes of a categorically distributed dependent variable, given a set of independent variables.
From a practical standpoint, MLR can be seen in action in various fields such as market research, where businesses may want to understand consumer preferences across multiple product categories. From a theoretical perspective, it represents an extension of the logistic model to multiclass problems, allowing for a more nuanced understanding of complex relationships.
Here's an in-depth look at MLR:
1. Model Structure: The MLR model predicts the probability of each category of the dependent variable, using a reference category for comparison. The probabilities are modeled using a softmax function, which generalizes the logistic function for multiple categories.
2. Estimation: The coefficients of an MLR model are estimated using maximum likelihood estimation (MLE). This involves finding the set of parameters that make the observed outcomes most probable.
3. Interpretation: The interpretation of MLR coefficients can be more challenging than in binary logistic regression. Each coefficient represents the change in the log odds of the outcome relative to the reference category, for a one-unit change in the predictor.
4. Assumptions: MLR assumes independence of irrelevant alternatives (IIA), which means that the relative probabilities between outcomes are independent of the presence or absence of other alternatives.
5. Diagnostics: Model diagnostics for MLR involve checking for the IIA assumption, multicollinearity among predictors, and the overall fit of the model.
6. Extensions: MLR can be extended to ordered outcomes using ordinal logistic regression, or to count data using Poisson or negative binomial regression models.
Example: Consider a study on transportation mode choice, where the options are 'Car', 'Bus', and 'Bike'. An MLR model could help predict the probability of choosing each mode based on factors like distance, cost, and individual preferences. If the model finds that the odds of choosing a bike over a car double for every kilometer closer to work, it provides actionable insights for urban planning and policy-making.
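To make this concrete, here is a minimal sketch in Python that fits an MLR to simulated transport-mode data using statsmodels' `MNLogit`. The dataset, variable names, and coefficient values are all hypothetical, chosen only to mirror the example above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulate a hypothetical transport-mode dataset: 0=Car, 1=Bus, 2=Bike.
rng = np.random.default_rng(0)
n = 1000
distance = rng.uniform(0.5, 20.0, n)   # km to work
cost = rng.uniform(1.0, 10.0, n)       # trip cost

# Log odds of Bus and Bike relative to the reference category (Car).
eta_bus = 1.0 - 0.10 * distance - 0.10 * cost
eta_bike = 2.0 - 0.35 * distance - 0.05 * cost
denom = 1.0 + np.exp(eta_bus) + np.exp(eta_bike)
probs = np.column_stack([1.0 / denom,
                         np.exp(eta_bus) / denom,
                         np.exp(eta_bike) / denom])
y = np.array([rng.choice(3, p=row) for row in probs])

# Fit the multinomial logit; category 0 (Car) acts as the base outcome.
X = sm.add_constant(pd.DataFrame({"distance": distance, "cost": cost}))
res = sm.MNLogit(y, X).fit(disp=False)
print(res.summary())

# Exponentiated coefficients: odds ratios for Bus and Bike vs. Car.
print(np.exp(res.params))
```

Each column of the fitted coefficient table corresponds to one non-reference category, matching the softmax structure described in point 1 above.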
In summary, MLR is a powerful tool for analyzing and predicting categorical outcomes with multiple levels. Its ability to handle complex, real-world scenarios makes it an indispensable technique in the arsenal of data analysts and researchers.
Multinomial models are a cornerstone of modern statistical analysis, particularly when it comes to understanding outcomes that can fall into more than two categories. Unlike binary logistic regression, which deals with dichotomous outcomes, multinomial logistic regression allows for the analysis of categorical dependent variables with more than two levels. This makes it an invaluable tool in fields as diverse as medical research, where it might be used to predict disease stages, to marketing, where it could help ascertain consumer preferences among multiple products.
The mathematics behind multinomial models is both elegant and complex. At its core, the model uses a series of equations to estimate the probability of each possible outcome. These probabilities must sum to one, ensuring that one of the possible outcomes will occur. The model does this through the use of logit functions, which link the probabilities of the various outcomes to a set of independent variables.
1. Probability Estimations: The foundation of multinomial models lies in the probability estimations for each category. For a dependent variable with \( m \) categories, the model estimates \( m-1 \) sets of coefficients, with the last category typically serving as a reference group. For each non-reference category \( k \), the probability is modeled as:
$$ P(Y_i = k) = \frac{e^{(\beta_{k0} + \beta_{k1}X_1 + ... + \beta_{kn}X_n)}}{1 + \sum_{j=1}^{m-1} e^{(\beta_{j0} + \beta_{j1}X_1 + ... + \beta_{jn}X_n)}} $$
Where \( P(Y_i = k) \) is the probability of the \( i \)-th observation being in category \( k \), \( X_1, ..., X_n \) are the independent variables, and \( \beta \) are the coefficients to be estimated. The reference category \( m \) receives the remaining probability, \( P(Y_i = m) = \frac{1}{1 + \sum_{j=1}^{m-1} e^{(\beta_{j0} + \beta_{j1}X_1 + ... + \beta_{jn}X_n)}} \), so the \( m \) probabilities sum to one.
2. Interpretation of Coefficients: The coefficients in a multinomial logistic regression are interpreted as the change in the log odds of the outcome relative to the reference category. For example, a coefficient of 0.5 for a particular independent variable would indicate that, holding all other variables constant, the odds of the outcome occurring in that category are \( e^{0.5} \) times the odds of it occurring in the reference category.
3. Model Fitting: The fitting of a multinomial model is typically done through maximum likelihood estimation. This method seeks the set of coefficients that makes the observed outcomes most probable. The iterative estimation process often involves algorithms such as Newton-Raphson or Expectation-Maximization.
4. Goodness-of-Fit: Assessing the fit of the model is crucial. Measures such as the likelihood-ratio test compare the fitted model to a null model with no predictors, while pseudo-R-squared values indicate how much of the variability in the data the model accounts for.
5. Extensions and Considerations: Multinomial models can be extended in various ways, including the incorporation of random effects in a multinomial mixed model or the use of penalized likelihood methods to handle data with many predictors. It's also important to consider the assumptions of the model, such as the independence of irrelevant alternatives (IIA), which assumes that the relative probabilities of two outcomes are unaffected by the presence or absence of other alternatives.
To illustrate these concepts, let's consider an example from the field of education. Imagine we want to predict the type of higher education institution students will attend based on their academic performance, socioeconomic status, and region. Our categories might include community college, state university, and private university. By applying a multinomial logistic regression, we can estimate the probabilities of a student attending each type of institution based on their background characteristics.
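To see the probability formula at work, the short sketch below plugs hypothetical coefficients for this education example into the equation above; the coefficient values and predictors (GPA and a standardized family-income score) are invented purely for illustration.

```python
import numpy as np

# Hypothetical coefficients for the two non-reference categories
# (community college is the reference; order: intercept, GPA, income).
beta_state = np.array([-1.0, 0.8, 0.1])    # state university
beta_private = np.array([-4.0, 1.2, 0.6])  # private university
x = np.array([1.0, 3.5, 0.5])              # [1, GPA, income z-score]

exp_state = np.exp(beta_state @ x)
exp_private = np.exp(beta_private @ x)
denom = 1.0 + exp_state + exp_private

p_state = exp_state / denom
p_private = exp_private / denom
p_community = 1.0 / denom   # reference category takes the remainder

# The three probabilities always sum to one.
print(p_community, p_state, p_private, p_community + p_state + p_private)
```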
The mathematics behind multinomial models is a blend of probability theory, optimization, and statistical inference. It provides a powerful framework for analyzing categorical data with multiple outcomes, offering insights that can inform decision-making across a multitude of disciplines. Understanding these models requires a solid grasp of the underlying mathematics, but the rewards in terms of analytical power are substantial. Multinomial logistic regression, in particular, extends the logistic model to handle multiple categories gracefully, making it a versatile tool in the statistician's arsenal.
Data preparation is a critical step in the modeling process, especially for multinomial logistic regression, where the outcome can take on three or more possible types. This type of regression is particularly useful in situations where the dependent variable is categorical and unordered, such as predicting the species of an iris flower, the brand of a product a customer might purchase, or the political party a voter might choose. The process involves several nuanced steps to ensure that the data accurately represents the problem at hand and that the model can learn from it effectively.
1. Understanding the Variables: Before any manipulation, it's crucial to understand the nature of the variables involved. For multinomial logistic regression, the dependent variable should be categorical with more than two levels, and the independent variables can be either continuous or categorical.
2. Data Cleaning: This involves handling missing values, outliers, and errors in the data. For instance, missing values can be imputed based on the median or mode, or rows with missing values can be removed entirely, depending on the context and amount of missing data.
3. Data Transformation: Often, the independent variables need to be transformed to improve the model's performance. This could involve normalizing or standardizing continuous variables to have a mean of zero and a standard deviation of one, or encoding categorical variables using techniques like one-hot encoding.
4. Multicollinearity Check: Multinomial logistic regression assumes that the independent variables are not highly correlated with each other. This can be checked using variance inflation factor (VIF) scores, and variables with high VIF scores may need to be removed or combined.
5. Feature Selection: Not all variables contribute equally to the prediction. Techniques like backward elimination, forward selection, or regularization methods like LASSO can be used to select the most relevant features.
6. Sampling: If the dataset is imbalanced, with some outcomes being much more common than others, techniques like oversampling the minority class or undersampling the majority class can be used to balance the dataset.
7. Splitting the Data: It's important to split the dataset into training and testing sets to evaluate the model's performance on unseen data. A common split ratio is 70% for training and 30% for testing.
8. Creating Dummy Variables: For categorical independent variables, it's necessary to create dummy variables to represent the categories numerically. Each category becomes a new variable with a binary indicator.
9. Interaction Terms: Sometimes, the effect of one variable on the outcome depends on another variable. In such cases, interaction terms can be created to capture these effects.
10. Final Checks: Before running the model, a final check should be done to ensure that all variables are correctly formatted and that there are no remaining issues with the data.
For example, consider a dataset where we want to predict a customer's choice of transportation (car, bus, or bike) based on their income, age, and distance to work. After cleaning the data and checking for multicollinearity, we might find that age and income are not highly correlated with each other, so we keep both in the model. We might also create interaction terms between age and distance to work if we believe that the effect of distance on transportation choice might differ for different age groups.
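A sketch of several of the steps above (the VIF check, standardization, one-hot encoding, and the train/test split), assuming a pandas/scikit-learn/statsmodels workflow, might look like the following; the dataset and column names are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical commuter data; 'mode' is the categorical outcome.
rng = np.random.default_rng(5)
n = 500
df = pd.DataFrame({
    "income": rng.normal(55.0, 15.0, n),
    "age": rng.uniform(18.0, 70.0, n),
    "distance": rng.uniform(0.5, 25.0, n),
    "region": rng.choice(["urban", "suburban", "rural"], n),
    "mode": rng.choice(["car", "bus", "bike"], n),
})

# Step 4: multicollinearity check on the continuous predictors
# (a common rule of thumb flags VIF values above roughly 5-10).
cont = df[["income", "age", "distance"]].assign(const=1.0)
vifs = {col: variance_inflation_factor(cont.values, i)
        for i, col in enumerate(cont.columns) if col != "const"}
print(vifs)

# Steps 3 and 8: standardize continuous predictors, one-hot encode
# categorical ones (dropping one level to avoid perfect collinearity).
pre = ColumnTransformer([
    ("num", StandardScaler(), ["income", "age", "distance"]),
    ("cat", OneHotEncoder(drop="first"), ["region"]),
])

# Step 7: a stratified 70/30 split; the transformer is fit on the
# training data only, so no information leaks from the test set.
X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="mode"), df["mode"],
    test_size=0.3, random_state=0, stratify=df["mode"])
X_train_t = pre.fit_transform(X_train)
X_test_t = pre.transform(X_test)
```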
By meticulously preparing the data, we lay a solid foundation for the multinomial logistic regression model, increasing the chances of obtaining a reliable, interpretable, and useful model.
In the realm of statistical modeling, the transition from binary logistic regression to multinomial logistic regression represents a significant leap in complexity and capability. While binary logistic regression is well-suited for scenarios with two possible outcomes, real-world situations often require the consideration of multiple categories. This is where multinomial logistic regression comes into play, offering a robust framework for modeling scenarios where the dependent variable can take on three or more unordered categories.
The core principle of multinomial logistic regression lies in its ability to handle multiple outcomes through a series of binary comparisons, each pitting one outcome category against a common reference category. This approach allows for the estimation of probabilities for each category relative to the reference, thus providing a comprehensive view of the factors influencing each choice.
1. Understanding the Odds Ratios: In multinomial logistic regression, the odds ratios are calculated for each outcome category against the reference category. This provides insights into how predictor variables affect the likelihood of each possible outcome.
2. Model Estimation: The estimation process involves maximizing the likelihood function, which is more complex due to the multiple categories but can be tackled using iterative algorithms like Newton-Raphson or Expectation-Maximization.
3. Interpretation of Coefficients: Each coefficient in the model represents the change in the log odds of the outcome category compared to the reference category, for a one-unit change in the predictor variable.
4. Model Diagnostics: Assessing model fit and accuracy is crucial. Measures such as the Akaike information criterion (AIC) or the Bayesian information criterion (BIC) can be used to compare models.
5. Predictive Power: The model's predictive power can be evaluated using cross-validation techniques or partitioning the data into training and test sets to assess how well the model generalizes to new data.
Example: Consider a marketing survey where consumers are asked to choose their preferred type of beverage from options like tea, coffee, and soda. A multinomial logistic regression model could help identify the factors that influence their choice. For instance, the model might reveal that age significantly affects the preference for coffee over tea, with older respondents showing a higher likelihood of choosing coffee.
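As one way to tie the estimation, diagnostic, and predictive-power points together, the sketch below fits a multinomial logit to simulated beverage-survey data with statsmodels and reports the fit measures mentioned above; the data and effect sizes are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.model_selection import train_test_split

# Simulated survey: 0=tea (reference), 1=coffee, 2=soda; preference
# for coffee rises with age, preference for soda falls with age.
rng = np.random.default_rng(6)
n = 1200
age = rng.uniform(18.0, 80.0, n)
eta = np.column_stack([np.zeros(n),          # tea (reference)
                       -2.0 + 0.05 * age,    # coffee vs. tea
                       1.0 - 0.03 * age])    # soda vs. tea
p = np.exp(eta) / np.exp(eta).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=row) for row in p])

X = sm.add_constant(pd.DataFrame({"age": age}))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
res = sm.MNLogit(y_tr, X_tr).fit(disp=False)

print(res.aic, res.bic)          # lower is better when comparing models
print(res.llr, res.llr_pvalue)   # likelihood-ratio test vs. null model
print(res.prsquared)             # McFadden's pseudo R-squared

# Out-of-sample accuracy as a simple check of predictive power.
acc = (np.asarray(res.predict(X_te)).argmax(axis=1) == y_te).mean()
print(acc)
```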
The shift from binary to multinomial logistic regression opens up a plethora of opportunities for deeper analysis and understanding of multi-category dependent variables. It's a powerful tool that, when used correctly, can unveil the intricate dynamics of choice in various fields, from marketing to social sciences.
Interpreting the outputs of a multinomial logistic regression requires a nuanced understanding of the model's structure and the context of the data. Unlike binary logistic regression, which predicts the probability of a binary outcome, multinomial logistic regression deals with multiple categories that are not ordered. This makes the interpretation slightly more complex, as each category has its own set of regression coefficients. The coefficients represent the log odds of being in a particular category when compared to a reference category, given a one-unit change in the predictor variable.
To gain insights from the model, it's crucial to examine the significance of the coefficients, the odds ratios, and the model's overall fit. From a practical standpoint, stakeholders from different domains might focus on various aspects of the output. For instance, a statistician might be interested in the p-values and confidence intervals to assess the reliability of the coefficients, while a business analyst might look at the predicted probabilities to make informed decisions.
Here's an in-depth look at interpreting the outputs:
1. Coefficients and Odds Ratios: For each predictor, the model estimates coefficients for all categories except the reference category. The exponentiated coefficients, known as odds ratios, are interpreted as the factor by which the odds of the outcome category increase (if greater than 1) or decrease (if less than 1) for a one-unit increase in the predictor.
Example: If the odds ratio for a predictor is 2.0 for category A compared to the reference category, it means that the odds of being in category A are twice as high for each additional unit of the predictor.
2. Significance Tests: The p-values associated with each coefficient test the null hypothesis that the coefficient is equal to zero (no effect). A small p-value (typically < 0.05) indicates that we can reject the null hypothesis and consider the coefficient to be significantly different from zero.
3. Confidence Intervals: These provide a range of values within which the true coefficient is likely to fall. A 95% confidence interval means that if we were to take 100 different samples and compute the interval each time, approximately 95 of those intervals would contain the true coefficient.
4. Model Fit: Goodness-of-fit measures, like the -2 Log Likelihood, Akaike Information Criterion (AIC), and Bayesian Information Criterion (BIC), help in comparing models. Lower values generally indicate a better fit.
5. Predicted Probabilities: After fitting the model, predicted probabilities for each category can be calculated. These probabilities can be used to classify observations into categories based on the highest probability.
6. Multicollinearity: It's important to check for multicollinearity among predictors, as it can inflate the variance of the coefficient estimates and make them unstable.
7. Interaction Effects: Sometimes, the relationship between a predictor and the outcome is different depending on the level of another predictor. Including interaction terms in the model can reveal these effects.
8. Diagnostic Plots: Various plots, such as residual plots or predicted-versus-actual plots, can be used to diagnose issues with the model, such as poor fit or outliers.
In practice, let's say we have a dataset with three categories of a dependent variable: 'Buy', 'Lease', and 'No Purchase' (reference category), and we're looking at the effect of income (predictor). If the coefficient for 'Buy' is positive and significant, it suggests that higher income increases the likelihood of buying over not purchasing. If the coefficient for 'Lease' is negative, it suggests that higher income decreases the likelihood of leasing compared to not purchasing.
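The sketch below shows one way these quantities (odds ratios, approximate 95% Wald intervals, and predicted probabilities) could be extracted with statsmodels; the 'No Purchase'/'Buy'/'Lease' data are simulated and all coefficient values are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated outcome: 0='No Purchase' (reference), 1='Buy', 2='Lease'.
rng = np.random.default_rng(1)
income = rng.normal(60.0, 20.0, 800)
eta_buy = -3.0 + 0.05 * income     # buying more likely as income rises
eta_lease = 1.0 - 0.02 * income    # leasing less likely as income rises
denom = 1.0 + np.exp(eta_buy) + np.exp(eta_lease)
p = np.column_stack([1.0 / denom,
                     np.exp(eta_buy) / denom,
                     np.exp(eta_lease) / denom])
y = np.array([rng.choice(3, p=row) for row in p])

X = sm.add_constant(pd.DataFrame({"income": income}))
res = sm.MNLogit(y, X).fit(disp=False)

# Odds ratios vs. 'No Purchase', with approximate 95% Wald intervals.
print(np.exp(res.params))
print(np.exp(res.params - 1.96 * res.bse))  # lower bounds
print(np.exp(res.params + 1.96 * res.bse))  # upper bounds

# Predicted probabilities per category; classify by the largest one.
probs = np.asarray(res.predict(X))
predicted_category = probs.argmax(axis=1)
```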
By carefully examining these aspects, one can extract meaningful insights from the multinomial logistic regression outputs, which can guide strategic decisions and further analyses. Remember, the key is to understand the context of the data and the implications of the model's findings within that context.
Generalized additive models (GAMs) represent a flexible class of models that extend linear regression by allowing non-linear functions of the predictor variables while maintaining interpretability. Unlike traditional regression models that impose a strict parametric form on the relationship between predictors and the response variable, GAMs use smooth functions to capture the non-linear effects. This flexibility makes GAMs particularly useful in situations where the relationship between the predictors and the outcome is suspected to be non-linear or complex.
Insights from Different Perspectives:
1. Statisticians appreciate GAMs for their ability to reveal the shape of the relationship between variables, which can be crucial for understanding underlying processes.
2. Data Scientists value GAMs for their predictive power and the ease with which they can incorporate them into a machine learning pipeline.
3. Domain Experts often find GAMs appealing because the results are interpretable and can be communicated effectively to non-technical stakeholders.
In-Depth Information:
1. Model Structure: At the heart of GAMs is the formula:
$$ y = \beta_0 + f_1(x_1) + f_2(x_2) + ... + f_p(x_p) + \epsilon $$
Where \( y \) is the response variable, \( \beta_0 \) is the intercept, \( f_i \) are smooth functions for each predictor \( x_i \), and \( \epsilon \) is the error term.
2. Smooth Functions: The smooth functions \( f_i \) are typically estimated using splines or kernel methods, which provide a way to "let the data speak" by fitting a smooth curve that avoids overfitting.
3. Additivity: The term "additive" indicates that the effect of each predictor on the response is considered separately, which simplifies the interpretation of the model.
4. Flexibility: GAMs can handle different types of response variables, such as continuous, binary, count, and even time-to-event data, making them versatile tools in statistical modeling.
Examples to Highlight Ideas:
- Ecological Data: In ecology, researchers might use GAMs to understand the relationship between environmental factors (like temperature and precipitation) and species distribution. For instance, a GAM could reveal that a certain plant species is most abundant at intermediate temperatures and low precipitation levels.
- Medical Research: In medical research, GAMs can be used to explore the effect of age and lifestyle factors on blood pressure. A GAM might show a non-linear increase in blood pressure with age and a sharp decrease associated with regular exercise.
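As a sketch of what fitting such a model can look like in practice, the example below uses the pygam library (one of several options) on simulated data loosely mimicking the ecological scenario above; the data-generating process is invented for illustration.

```python
import numpy as np
from pygam import LinearGAM, s

# Simulated abundance: peaks at intermediate temperature, declines
# with precipitation, plus noise.
rng = np.random.default_rng(2)
n = 400
temp = rng.uniform(0.0, 30.0, n)
precip = rng.uniform(200.0, 2000.0, n)
abundance = (np.exp(-(temp - 15.0) ** 2 / 40.0)
             - 0.0003 * precip
             + rng.normal(0.0, 0.1, n))

# One smooth term per predictor: y = b0 + f1(temp) + f2(precip) + e.
X = np.column_stack([temp, precip])
gam = LinearGAM(s(0) + s(1)).fit(X, abundance)
gam.summary()

# Partial dependence recovers the fitted shape of each smooth term,
# e.g. the hump-shaped response to temperature.
XX = gam.generate_X_grid(term=0)
pdep = gam.partial_dependence(term=0, X=XX)
```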
GAMs serve as a bridge between traditional parametric models and more complex machine learning methods, offering a balance between interpretability and flexibility. They are particularly powerful in exploratory data analysis, where the goal is to uncover patterns and relationships within the data.
Integrating Generalized Additive Models (GAMs) with Multinomial Logistic Regression offers a powerful approach to modeling relationships where the response variable is categorical with more than two levels. This integration allows for the flexibility of GAMs in handling non-linear relationships while maintaining the interpretability of logistic regression. The essence of this integration lies in the ability to model complex, real-world scenarios where the influence of predictor variables on the outcomes is not strictly linear, which is often the case in fields such as ecology, medicine, and social sciences.
From a statistical perspective, the marriage of GAMs with multinomial logistic regression is a significant advancement. It enables the use of smooth functions to capture the intricate patterns in the data, which traditional logistic regression might miss. For instance, consider a medical study analyzing the risk factors for a disease with three stages. The relationship between a risk factor, such as age, and the probability of each disease stage might not be linear. Here, GAMs can be employed to model this non-linearity.
Insights from Different Perspectives:
1. Statistical Accuracy: By incorporating smooth functions, GAMs allow for a more nuanced understanding of the influence of predictors. This can lead to more accurate predictions, especially when dealing with complex datasets.
2. Interpretability: Despite the added complexity, the results can still be interpreted within the framework of logistic regression. Each predictor's effect is visualized through its smooth function, making it easier to communicate findings to non-experts.
3. Flexibility: GAMs provide the flexibility to model non-linear relationships without having to specify the exact form of the non-linearity, which is often unknown.
In-Depth Information:
1. Model Formulation: The model can be expressed as:
$$ P(Y=k) = \frac{e^{\eta_k}}{\sum_{j=1}^{K} e^{\eta_j}} $$
Where \( \eta_k \) is the additive predictor for the \( k \)-th category, composed of a sum of smooth functions of the predictors.
2. Smooth Functions: These are typically splines or kernel functions that are added to the linear predictor of the logistic model. They are used to model the effect of continuous predictors on the log-odds of the outcomes.
3. Model Fitting: The fitting process involves selecting the appropriate smoothness for each function, which can be done using methods like cross-validation or generalized cross-validation.
Example to Highlight an Idea:
Imagine a marketing study aiming to understand consumer preferences among three brands of a product. Age and income might influence brand preference, but not in a simple linear way. A GAM integrated with multinomial logistic regression could reveal that younger consumers prefer Brand A, middle-aged consumers have no strong preference, and older consumers prefer Brand B, with income moderating these preferences at different levels.
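Python currently has no widely used joint multinomial GAM implementation (R's mgcv fits one via its `multinom` family), so the sketch below approximates the idea with a one-vs-rest set of binary GAMs from pygam, renormalizing the predicted probabilities afterwards. The brand-preference data are simulated, and the construction should be read as an approximation of the joint model, not the canonical method.

```python
import numpy as np
from pygam import LogisticGAM, s

# Simulated preferences among brands 0=A, 1=B, 2=C, driven by age and
# income in a hypothetical way.
rng = np.random.default_rng(3)
n = 900
age = rng.uniform(18.0, 75.0, n)
income = rng.normal(55.0, 18.0, n)

eta = np.column_stack([3.0 - 0.06 * age,         # brand A: younger buyers
                       -4.0 + 0.08 * age,        # brand B: older buyers
                       0.02 * (income - 55.0)])  # brand C: income-driven
p = np.exp(eta) / np.exp(eta).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=row) for row in p])

X = np.column_stack([age, income])

# One-vs-rest approximation: one binary GAM per brand with smooth
# terms for age and income, then renormalize so probabilities sum to 1.
gams = [LogisticGAM(s(0) + s(1)).fit(X, (y == k).astype(int))
        for k in range(3)]
probs = np.column_stack([g.predict_proba(X) for g in gams])
probs /= probs.sum(axis=1, keepdims=True)
```

Plotting each GAM's partial dependence on age would reveal the kind of non-monotone preference pattern the marketing example describes.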
Integrating GAMs with multinomial logistic regression is a robust method that captures the complexity of real-world data, providing deeper insights and more accurate predictions. It's a testament to the evolving nature of statistical modeling, adapting to the intricacies of the data it seeks to explain.
Multinomial Logistic Regression (MLR) is a powerful statistical method that allows researchers and data analysts to understand and model relationships where the dependent variable is categorical with more than two levels. Unlike binary logistic regression, which deals with dichotomous outcomes, MLR is used when the outcome can take on three or more possible types—often referred to as choice modeling. It's particularly useful in fields like market research, health outcomes analysis, and social sciences, where decisions or states are not merely 'yes or no' but come with multiple options.
1. Market Basket Analysis in Retail:
A classic application of MLR is in market basket analysis, where retailers are interested in understanding the purchasing patterns of customers. By analyzing transaction data, MLR can predict the likelihood of a customer purchasing a combination of products. For example, if a customer buys bread and milk, what is the probability they will also buy butter, cheese, or jam? This insight helps in strategic product placement and targeted promotions.
2. Medical Diagnosis:
In healthcare, MLR assists in differential diagnosis by estimating the probabilities of various diseases given a set of symptoms and test results. For instance, if a patient presents with fever, headache, and a rash, MLR can help determine whether the patient is more likely to have dengue, Zika virus, or chikungunya, aiding clinicians in making informed treatment decisions.
3. Political Science - Voting Preferences:
Political scientists use MLR to analyze voting behavior. By considering factors such as age, income, education, and political issues, MLR models can predict the likelihood of a voter supporting different political parties or candidates. This type of analysis was pivotal in understanding the dynamics of the 2016 US presidential election, where voter preferences were not simply binary.
4. Customer Satisfaction and Service Quality:
Businesses often use MLR to understand the factors that influence customer satisfaction. By surveying customers and analyzing their responses, businesses can model the impact of various service attributes on overall satisfaction levels. For example, in the hospitality industry, MLR can discern how room quality, staff service, and amenities contribute to the likelihood of a customer being satisfied, neutral, or dissatisfied.
5. Transportation Mode Choice:
Transportation researchers employ MLR to study how individuals choose between different modes of transport, such as cars, buses, trains, or bicycles. Factors like travel time, cost, and convenience are used to predict the probability of a commuter choosing a particular mode over others, which is essential for urban planning and policy-making.
Through these case studies, it's evident that MLR is a versatile tool that can provide valuable insights across various domains. Its ability to handle multiple outcomes makes it indispensable for complex decision-making scenarios where the choices are not just 'this or that' but span a range of possibilities.
As we delve deeper into the realm of multinomial analysis, it becomes increasingly clear that the traditional methods, while robust, offer a limited view of the intricate relationships within multivariate data. The advent of advanced techniques has opened up new avenues for researchers and analysts to explore, providing a richer, more nuanced understanding of the data at hand. These sophisticated methods not only enhance the accuracy of the models but also offer greater flexibility in handling complex, real-world scenarios where the outcomes are far from binary.
From the perspective of computational advancements, algorithms have become more efficient, enabling the processing of larger datasets with higher dimensionality. Machine learning integration, for instance, has introduced a level of predictive power and automation previously unattainable. On the theoretical front, the development of new statistical theories has expanded the boundaries of what is possible with multinomial logistic regression and Generalized Additive Models (GAMs).
Let's explore some of these advanced techniques and future directions:
1. Regularization Methods: Techniques like LASSO (Least Absolute Shrinkage and Selection Operator) and Ridge regression have been adapted for multinomial logistic regression to prevent overfitting and enhance model generalization. For example, LASSO can be particularly useful in selecting significant predictors out of a large number of potential variables by shrinking the less important coefficients to zero.
2. Bayesian Approaches: Bayesian methods provide a probabilistic framework that can incorporate prior knowledge into the multinomial analysis. This is particularly beneficial when dealing with small datasets or when domain expertise can inform the model structure.
3. Ensemble Learning: Combining multiple models to improve predictions has shown promise in multinomial contexts. Techniques like Random Forests and Gradient Boosting Machines can handle categorical outcomes with multiple levels by aggregating the results from numerous decision trees.
4. Artificial Neural Networks: With their ability to model complex, non-linear relationships, neural networks have been applied to multinomial analysis, offering an alternative to traditional regression models. For instance, a neural network could be trained to classify images into multiple categories based on their features.
5. Support Vector Machines (SVMs): SVMs have been extended to handle multiple classes through approaches like one-vs-one and one-vs-all strategies. These methods are particularly useful in high-dimensional spaces where the data can be separated with hyperplanes.
6. Generalized Additive Models for Location, Scale, and Shape (GAMLSS): GAMLSS extend the flexibility of GAMs by allowing the distribution of the response variable to be modeled, not just the mean. This is particularly useful in situations where the variance or other distributional parameters change with the predictors.
7. Deep Learning: The rise of deep learning has brought about sophisticated architectures like Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), which can be applied to multinomial classification tasks with high-dimensional data, such as text and images.
8. Hybrid Models: Combining the strengths of different modeling approaches can lead to more robust predictions. For example, a hybrid model might use a GAM to capture non-linear effects and a neural network to model complex interactions.
In terms of future directions, the integration of causal inference in multinomial analysis is an exciting development. This approach seeks to not only predict outcomes but also to understand the underlying causal mechanisms. Additionally, the field is moving towards more interpretable models, as there is a growing need to explain the predictions made by complex algorithms, especially in sensitive areas like healthcare and finance.
To illustrate, consider a study examining the factors influencing the choice of transportation modes among commuters. A multinomial logistic regression model could be used to predict the probability of choosing car, bus, bike, or walking based on variables like distance, cost, and environmental concern. Advanced techniques like those mentioned above could refine the model's predictions and offer deeper insights into the commuters' decision-making processes.
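As an illustration of the regularization methods listed above (point 1), the sketch below fits an L1-penalized multinomial logistic regression with scikit-learn to simulated commuter data padded with deliberately irrelevant predictors; the dataset and penalty strength are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Simulated mode choice (0=car, 1=bus, 2=bike) driven by distance and
# cost, plus 20 pure-noise predictors that the lasso should zero out.
rng = np.random.default_rng(4)
n = 600
distance = rng.uniform(0.5, 20.0, n)
cost = rng.uniform(1.0, 10.0, n)
noise = rng.normal(size=(n, 20))
X = np.column_stack([distance, cost, noise])

eta = np.column_stack([np.zeros(n),
                       1.0 - 0.10 * distance,
                       2.0 - 0.30 * distance - 0.20 * cost])
p = np.exp(eta) / np.exp(eta).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=row) for row in p])

# 'saga' is the solver that supports the L1 penalty with the
# multinomial (softmax) loss in scikit-learn.
clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="saga", C=0.5, max_iter=5000))
print(cross_val_score(clf, X, y, cv=5).mean())

clf.fit(X, y)
coefs = clf.named_steps["logisticregression"].coef_
print((np.abs(coefs) < 1e-8).mean())  # share of coefficients shrunk to 0
```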
The landscape of multinomial analysis is rapidly evolving, with advanced techniques enhancing our ability to glean insights from complex datasets. The future promises even more sophisticated tools, with the potential to revolutionize the way we analyze and interpret multivariate categorical data.