A MAJOR PROJECT REPORT
ON
“MACHINE LEARNING APPROACHES FOR IDENTIFYING
NETWORK CYBER THREATS”
Submitted to
SRI INDU COLLEGE OF ENGINEERING AND TECHNOLOGY, HYDERABAD
In partial fulfillment of the requirements for the award of degree of
BACHELOR OF TECHNOLOGY
In
COMPUTER SCIENCE AND ENGINEERING
Submitted by
S.SHARATH KUMAR [20D41A05K3]
S.VENKATESH [20D41A05J6]
M.KEERTHI [20D41A05N5]
K.MANIKANTA [20D41A05N7]
Under the esteemed guidance of
Mrs. K.VIJAYA LAKSHMI
(Assistant Professor)
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
SRI INDU COLLEGE OF ENGINEERING AND
TECHNOLOGY
(An Autonomous Institution under UGC, Accredited by NBA, Affiliated to JNTUH)
Sheriguda (V), Ibrahimpatnam (M), Rangareddy Dist – 501510
(2023-2024)
SRI INDU COLLEGE OF ENGINEERING AND TECHNOLOGY
(An Autonomous Institution under UGC, Accredited by NBA, Affiliated to JNTUH)
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
CERTIFICATE
Certified that the Major Project entitled “MACHINE LEARNING APPROACHES FOR
IDENTIFYING NETWORK CYBER THREATS” is a bonafide work carried out by
S.SHARATH [20D41A05K3], S.VENKATESH [20D41A05J6], M.KEERTHI
[20D41A05N5] and K.MANIKANTA [20D41A05N7] in partial fulfillment for the award of
the degree of Bachelor of Technology in Computer Science and Engineering of SICET,
Hyderabad, for the academic year 2023-2024. The project has been approved as it satisfies the
academic requirements in respect of the work prescribed for the IV Year, II Semester of the
B.Tech course.
INTERNAL GUIDE HEAD OF THE DEPARTMENT
(Mrs. K. VIJAYA LAKSHMI) (Prof. Ch.G.V.N. Prasad)
(Assistant Professor)
EXTERNAL EXAMINER
ACKNOWLEDGEMENT
The satisfaction that accompanies the successful completion of a task would be
incomplete without mentioning the people who made it possible, whose constant guidance
and encouragement crowned all our efforts with success. We are thankful to our Principal,
Dr. G. SURESH, for giving us permission to carry out this project. We are highly indebted to
Prof. Ch.G.V.N. Prasad, Head of the Department of Computer Science and Engineering, for
providing the necessary infrastructure and labs, and for valuable guidance at every stage of this
project. We are grateful to our internal project guide, Mrs. K. VIJAYA LAKSHMI,
Assistant Professor, for her constant motivation and guidance during the execution of this
project work. We would like to thank the teaching and non-teaching staff of the
Department of Computer Science and Engineering for sharing their knowledge with us. Last
but not least, we express our sincere thanks to everyone who helped directly or indirectly in
the completion of this project.
S.SHARATH KUMAR [20D41A05K3]
S.VENKATESH [20D41A05J6]
M.KEERTHI [20D41A05N5]
K.MANIKANTA [20D41A05N7]
ABSTRACT
Compared with the past, developments in computer and communication technologies have
brought extensive and advanced changes. The use of new technologies provides great benefits to
individuals, companies, and governments; however, it also creates problems for them, such as
the privacy of important information, the security of stored data platforms, and the
availability of information. Owing to these issues, cyber terrorism is one of the most
significant problems in today's world. Cyber terror, which has caused a great deal of harm to
people and institutions, has reached a level that could threaten public and national security
through various groups such as criminal organizations, professionals, and cyber activists.
Intrusion Detection Systems (IDS) have therefore been developed to fend off cyber attacks.
In this work, deep learning and support vector machine (SVM) algorithms were used to
detect port scan attempts on the new CICIDS2017 dataset, and accuracy rates of 97.80% and
69.79% respectively were achieved. In addition to SVM, we introduce other algorithms such
as Random Forest, CNN, and ANN; the accuracies obtained are SVM – 93.29%, CNN –
63.52%, Random Forest – 99.93%, and ANN – 99.11%.
CONTENTS

1. INTRODUCTION
   1.1 INTRODUCTION TO PROJECT
   1.2 LITERATURE SURVEY
   1.3 MODULES
2. SYSTEM ANALYSIS
   2.1 EXISTING SYSTEM & ITS DISADVANTAGES
   2.2 PROPOSED SYSTEM & ITS ADVANTAGES
   2.3 SYSTEM REQUIREMENTS
3. SYSTEM STUDY
   3.1 FEASIBILITY STUDY
4. SYSTEM DESIGN
   4.1 ARCHITECTURE
   4.2 UML DIAGRAMS
      4.2.1 USE CASE DIAGRAM
      4.2.2 CLASS DIAGRAM
      4.2.3 SEQUENCE DIAGRAM
5. TECHNOLOGIES USED
   5.1 WHAT IS PYTHON?
      5.1.1 ADVANTAGES & DISADVANTAGES OF PYTHON
      5.1.2 HISTORY OF PYTHON
   5.2 WHAT IS MACHINE LEARNING
      5.2.1 CATEGORIES OF ML
      5.2.2 NEED FOR ML
      5.2.3 CHALLENGES IN ML
      5.2.4 APPLICATIONS OF ML
      5.2.5 HOW TO START LEARNING ML?
      5.2.6 ADVANTAGES & DISADVANTAGES OF ML
   5.3 PYTHON DEVELOPMENT STEPS
   5.4 MODULES USED IN PROJECT
   5.5 INSTALL PYTHON STEP-BY-STEP IN WINDOWS AND MAC
6. IMPLEMENTATION
   6.1 SOFTWARE ENVIRONMENT
   6.2 PYTHON
   6.3 SAMPLE CODE
7. SYSTEM TESTING
   7.1 INTRODUCTION TO TESTING
   7.2 TESTING STRATEGIES
8. SCREENSHOTS
9. CONCLUSION
10. REFERENCES
LIST OF FIGURES
Fig No Name Page No
Fig.1 Architecture diagram 12
Fig.2 Use case diagram 14
Fig.3 Class diagram 14
Fig.4 Sequence diagram 15
LIST OF SCREENSHOTS
Fig No Name Page No
Fig.1-2 Run project and Upload data set 58
Fig.3-4 Preprocessing TF-IDF Algorithm 59
Fig.5-8 Neural Network Profiling 60-61
Fig.10-11 SVM Algorithm 62
Fig.12-13 KNN Algorithm and Random Forest Algorithm 63
Fig.14-15 Naïve Bayes algorithm 64
Fig.16-18 Comparison Graph 65-66
1. INTRODUCTION
Compared with the past, developments in computer and communication technologies have
brought extensive and advanced changes. The use of new technologies provides great benefits
to individuals, companies, and governments; however, it also creates problems for them, such
as the privacy of important information, the security of stored data platforms, and the
availability of information. Owing to these issues, cyber terrorism is one of the most
significant problems in today's world. Cyber terror, which has caused a great deal of harm to
people and institutions, has reached a level that could threaten public and national security
through various groups such as criminal organizations, professionals, and cyber activists.
Intrusion Detection Systems (IDS) have therefore been developed to fend off cyber attacks.
In this work, deep learning and support vector machine (SVM) algorithms were used to
detect port scan attempts on the new CICIDS2017 dataset, and accuracy rates of 97.80% and
69.79% respectively were achieved. In addition to SVM, we introduce other algorithms such
as Random Forest, CNN, and ANN; the accuracies obtained are SVM – 93.29%, CNN –
63.52%, Random Forest – 99.93%, and ANN – 99.11%.
MOTIVATION
The use of new technologies provides great benefits to individuals, companies, and
governments; however, it also creates problems for them, such as the privacy of important
information, the security of stored data platforms, and the availability of information. Owing
to these issues, cyber terrorism is one of the most significant problems in today's world.
Cyber terror, which has caused a great deal of harm to people and institutions, has reached a
level that could threaten public and national security through various groups such as criminal
organizations, professionals, and cyber activists. Intrusion Detection Systems (IDS) have
therefore been developed to fend off cyber attacks.
OBJECTIVES
The objective of this project is to detect cyber attacks by using machine learning algorithms
such as:
• ANN
• CNN
• Random Forest
1.2 LITERATURE SURVEY
R. Christopher, “Port scanning techniques and the defense against them,” SANS
Institute, 2001.
Port Scanning is one of the most popular techniques attackers use to discover services that
they can exploit to break into systems. All systems that are connected to a LAN or the
Internet via a modem run services that listen to well-known and not so well-known ports. By
port scanning, the attacker can find the following information about the targeted systems:
what services are running, what users own those services, whether anonymous logins are
supported, and whether certain network services require authentication. Port scanning is
accomplished by sending a message to each port, one at a time. The kind of response
received indicates whether the port is used and can be probed for further weaknesses. Port
scanners are important to network security technicians because they can reveal possible
security vulnerabilities on the targeted system. Every publicly available system has ports that
are open and available for use. The object is to limit the exposure of open ports to authorized
users and to deny access to the closed ports.
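To make the technique concrete, the following is a minimal illustrative sketch in Python of the
TCP connect scan described above ("sending a message to each port, one at a time"); the host
and port list are placeholders, and such a scan should only be run against systems you are
authorized to test:

import socket

def scan_ports(host, ports, timeout=0.5):
    """Try a TCP connect() to each port; an accepted connection means the port is open."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: probe a few well-known ports on the local machine
print(scan_ports("127.0.0.1", [22, 80, 443, 3306]))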
S. Staniford, J. A. Hoagland, and J. M. McAlerney, “Practical automated detection of
stealthy portscans,” Journal of Computer Security, vol. 10, no. 1-2, pp. 105–136, 2002.
Portscanning is a common activity of considerable importance. It is often used by computer
attackers to characterize hosts or networks which they are considering hostile activity
against. Thus it is useful for system administrators and other network defenders to detect
portscans as possible preliminaries to a more serious attack. It is also widely used by
network defenders to understand and find vulnerabilities in their own networks. Thus it is of
considerable interest to attackers to determine whether or not the defenders of a network are
portscanning it regularly. However, defenders will not usually wish to hide their
portscanning, while attackers will. For definiteness, in the remainder of this paper, we will
speak of the attackers scanning the network, and the defenders trying to detect the scan.
One issue concerns whether portscanning of remote networks without permission from the
owners is itself a legal and ethical activity. This is presently a grey area in most
jurisdictions. So we think it reasonable to consider a portscan as at least potentially hostile,
and to report it to the administrators of the remote network from whence it came.
However, this paper is focussed on the technical questions of how to detect portscans,
which are independent of what significance one imbues them with, or how one chooses to
respond to them.
In the next section, we discuss a variety of prior work on portscan detection. Then we
present the algorithms that we propose to use, and give some very preliminary data
justifying our approach. Finally, we consider possible extensions to this work, along with
other applications that might be considered. The primary purpose is that of gathering
information about the reachability and status of certain combinations of IP address and port
(either TCP or UDP). The secondary purpose is to flood intrusion detection systems with
alerts, with the intention of distracting the network defenders or preventing them from
doing their jobs. We will use the term scan footprint for the set of port/IP combinations
which the attacker is interested in characterizing. The most common type of portscan
footprint at present is a horizontal scan. By this, we mean that an attacker has an exploit for
a particular service, and is interested in finding any hosts that expose that service.
M. C. Raja and M. M. A. Rabbani, “Combined analysis of support vector machine
and principle component analysis for ids,” in IEEE International Conference on
Communication and Electronics Systems, 2016, pp. 1–5.
Compared to the past, the security of networked systems has become a critical universal issue
that influences individuals, enterprises and governments. Based on the detection technique,
intrusion detection is classified into anomaly-based and signature-based. The authors
examined the performance of these features with different algorithms that included:
K-Nearest Neighbor (KNN), Adaboost, Multi-Layer Perceptron (MLP), Naïve Bayes,
Random Forest (RF), Iterative Dichotomiser 3 (ID3) and Quadratic Discriminant Analysis
(QDA). The highest precision value was 0.98 with RF and ID3 [4]. The execution time
(time to build the model) was 74.39 s, while the execution time for our proposed system
using Random Forest is 21.52 s with a comparable processor. Some of these are discussed
here. The developers used statistical metrics such as minimum, maximum, mean and
standard deviation to encapsulate the network events into a set of certain features, which
include: 1. the distribution of the packet size; 2. the number of packets per flow; 3. the size
of the payload; 4. the request time distribution of the protocols; 5. certain patterns in the
payload. Moreover, CICIDS2017 covers various attack scenarios that represent common
attack families. The attacks include Brute Force Attack, Heartbleed Attack, Botnet, DoS
Attack, Distributed DoS (DDoS) Attack, Web Attack, and Infiltration Attack. Moreover,
SVM requires the processing of raw features for classification, which increases the
architecture complexity and decreases the accuracy of detecting intrusions.
Deep learning is an improved machine learning technique for feature extraction, perception
and learning of machines.
Deep learning algorithms perform their operations using multiple consecutive layers. Deep
Learning has many application areas, including Image Processing, Natural Language
Processing, biomedical applications, Customer Relationship Management automation,
autonomous vehicle systems and others.
1.3 MODULES
This project consists of 4 modules
1. DATA COLLECTION
2. DATA PRE-PROCESSING
3. FEATURE EXTRACTION
4. EVALUATION MODEL
1. DATA COLLECTION: Gathering essential data (like network traffic details) from the
CICIDS2017 dataset, vital for identifying port scan attempts and potential security threats.
2. DATA PRE-PROCESSING: Cleaning, handling missing values, and organizing the data to
make it compatible and optimal for machine learning algorithms to process effectively and
accurately.
3. FEATURE EXTRACTION: Selecting and deriving crucial attributes (e.g., packet size,
protocol types) from the organized data that serve as inputs for machine learning models to
identify patterns related to port scan attempts.
4. EVALUATION MODEL: Applying diverse algorithms like SVM, Random Forest, CNN, and
ANN to the extracted features, training these models on a subset of data, and assessing their
performance in accurately detecting port scan attempts to enhance cybersecurity measures.
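A minimal sketch of these four modules is shown below, assuming the CICIDS2017 port-scan
capture has been downloaded locally as a CSV file (the file name is a placeholder) and using
the Random Forest classifier from scikit-learn:

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# 1. DATA COLLECTION: load the CICIDS2017 capture (file name is a placeholder)
df = pd.read_csv("CICIDS2017_portscan.csv")
df.columns = df.columns.str.strip()  # CICIDS2017 headers often carry stray spaces

# 2. DATA PRE-PROCESSING: drop missing/infinite rows and encode the label
df = df.replace([np.inf, -np.inf], np.nan).dropna()
y = (df["Label"] != "BENIGN").astype(int)  # 1 = attack, 0 = normal traffic

# 3. FEATURE EXTRACTION: keep the numeric flow features as model inputs
X = df.drop(columns=["Label"]).select_dtypes(include=[np.number])

# 4. EVALUATION MODEL: train on a subset of the data and assess detection accuracy
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))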
2.SYSTEM ANALYSIS
2.1 EXISTING SYSTEM
The existing system for detecting cyber attacks in networks typically relies on traditional
signature-based detection methods, which rely on pre-defined patterns of known attacks to
identify new attacks. These methods are limited in their ability to detect new and evolving types
of attacks and may generate false positives or negatives.
Disadvantages
1) Strict Regulations
2) Difficult to work with for non-technical users
3) Restrictive to resources
4) Constantly needs Patching
5) Constantly being attacked
2.2 PROPOSED SYSTEM
The proposed system for detecting cyber attacks in networks using machine learning techniques
aims to address the limitations of existing systems. The proposed system uses machine learning
algorithms to analyze network traffic patterns and detect anomalies that may indicate a cyber
attack. The system can be trained to recognize normal network behavior and identify deviations
from this behavior that may indicate an attack. The proposed system can also be enhanced with
additional features such as real-time monitoring, automatic response mechanisms, and integration
with other security systems. Real-time monitoring allows the system to detect attacks as they
occur, and automatic response mechanisms can help mitigate the damage caused by attacks.
Integration with other security systems, such as firewalls and intrusion detection systems, can
improve overall network security and enhance the effectiveness of the proposed system.
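The deviation-from-normal idea can be sketched with an unsupervised detector. The snippet
below uses scikit-learn's IsolationForest on synthetic stand-in data; it illustrates the concept
of learning normal behavior and flagging deviations, and is not the exact model evaluated in
this report:

from sklearn.ensemble import IsolationForest
import numpy as np

# Train on traffic assumed to be normal (rows = flows, columns = numeric features)
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))  # stand-in for benign flows
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# Score new flows: -1 flags a deviation from the learned normal behavior
new_flows = np.vstack([rng.normal(0, 1, (5, 4)), rng.normal(8, 1, (2, 4))])
print(detector.predict(new_flows))  # anomalous flows come back as -1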
Advantages
• Protection from malicious attacks on your network.
• Detection and deletion of malicious elements within a pre-existing network.
• Prevents unauthorized access to the network.
• Denies programs access to certain resources that could be infected.
• Secures confidential information.
2.3 SYSTEM REQUIREMENTS:
SOFTWARE REQUIREMENTS
The functional requirements or the overall description documents include the product
perspective and features, operating system and operating environment, graphics requirements,
design constraints and user documentation.The appropriation of requirements and
implementation constraints gives the general overview of the project in regards to what the
areas of strength and deficit are and how to tackle them.
• Python idle 3.7 version (or)
• Anaconda 3.7 ( or)
• Jupiter (or)
• Google Colab
HARDWARE REQUIREMENTS
Minimum hardware requirements are very dependent on the particular software being
developed by a given Enthought Python / Canopy / VS Code user. Applications that need to
store large arrays/objects in memory will require more RAM, whereas applications that need
to perform numerous calculations or tasks more quickly will require a faster processor.
• Operating system : Windows, Linux
• Processor : Intel i3
• RAM : 4 GB
3. SYSTEM STUDY
3.1 FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase and business proposal is put forth with
a very general plan for the project and some cost estimates. During system analysis the
feasibility study of the proposed system is to be carried out. This is to ensure that the
proposed system is not a burden to the company. For feasibility analysis, some understanding
of the major requirements for the system is essential. Three key considerations involved in
the feasibility analysis are:
• ECONOMICAL FEASIBILITY
• TECHNICAL FEASIBILITY
• SOCIAL FEASIBILITY
1. ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will have on the
organization. The amount of funds that the company can pour into the research and
development of the system is limited, so the expenditures must be justified. The developed
system was well within the budget, and this was achieved because most of the technologies
used are freely available. Only the customized products had to be purchased.
2. TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical
requirements of the system. Any system developed must not place a high demand on the
available technical resources, as this would lead to high demands being placed on the
client.
3. SOCIAL FEASIBILITY
This aspect of the study is to check the level of acceptance of the system by the user. This
includes the process of training the user to use the system efficiently. The user must not
feel threatened by the system, instead must accept it as a necessity. The level of
acceptance by the users solely depends on the methods that are employed to educate the
user about the system and to make him familiar with it.
4.SYSTEM DESIGN
4.1 ARCHITECTURE
Fig.1
4.2 UML DIAGRAMS
UML stands for Unified Modeling Language. UML is a standardized general-purpose
modeling language in the field of object-oriented software engineering. The standard is
managed, and was created by, the Object Management Group. The goal is for UML to
become a common language for creating models of object oriented computer software. In its
current form UML is comprised of two major components: a Meta-model and a notation. In
the future, some form of method or process may also be added to or associated with, UML.
The Unified Modeling Language is a standard language for specifying, visualizing,
constructing and documenting the artifacts of software systems, as well as for business
modeling and other non-software systems. The UML represents a collection of best
engineering practices that have proven successful in the modeling of large and complex
systems. The UML is a very important part of developing object-oriented software and the
software development process. The UML uses mostly graphical notations to express the
design of software projects.
GOALS:
The Primary goals in the design of the UML are as follows:
1. Provide users a ready-to-use, expressive visual modeling Language so that they can
develop and exchange meaningful models.
2. Provide extensibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development process.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of OO tools market.
6. Support higher level development concepts such as collaborations, frameworks,
patterns and components.
7. Integrate best practices.
4.2.1 USE CASE DIAGRAM:
A use case diagram in the Unified Modeling Language (UML) is a type of behavioral
diagram defined by and created from a use-case analysis. Its purpose is to present a graphical
overview of the functionality provided by a system in terms of actors, their goals (represented
as use cases), and any dependencies between those use cases.
Fig.2
4.2.2 CLASS DIAGRAM:
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type
of static structure diagram that describes the structure of a system by showing the system's
classes, their attributes, operations (or methods), and the relationships among the classes. It
explains which class contains information.
Fig.3
4.2.3 SEQUENCE DIAGRAM:
A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram
that shows how processes operate with one another and in what order. It is a construct of a
Message Sequence Chart. Sequence diagrams are sometimes called event diagrams, event
scenarios, and timing diagrams.
Fig.4
5.TECHNOLOGIES USED
5.1 WHAT IS PYTHON?
Below are some facts about Python.
Python is currently the most widely used multi-purpose, high-level programming language.
Python allows programming in Object-Oriented and Procedural paradigms. Python
programs generally are smaller than other programming languages like Java.
Programmers have to type relatively less, and the indentation requirement of the language
makes their code readable all the time.
The Python language is being used by almost all tech-giant companies, such as Google,
Amazon, Facebook, Instagram, Dropbox, and Uber.
The biggest strength of Python is its huge collection of standard libraries, which can be used
for the following:
• Machine Learning
• GUI Applications (like Kivy, Tkinter, PyQt etc. )
• Web frameworks like Django (used by YouTube, Instagram, Dropbox)
• Image processing (like Opencv, Pillow)
• Web scraping (like Scrapy, BeautifulSoup, Selenium)
• Test frameworks
• Multimedia
5.1.1 ADVANTAGES & DISADVANTAGES OF PYTHON
Advantages of Python :-
Let’s see how Python dominates over other languages.
1. Extensive Libraries
Python ships with an extensive library containing code for various purposes like
regular expressions, documentation generation, unit testing, web browsers, threading,
databases, CGI, email, image manipulation, and more. So, we don't have to write the
complete code for that manually.
2. Extensible
As we have seen earlier, Python can be extended to other languages. You can write some of
your code in languages like C++ or C. This comes in handy, especially in projects.
3. Embeddable
Complimentary to extensibility, Python is embeddable as well. You can put your Python
code in your source code of a different language, like C++. This lets us add scripting
capabilities to our code in the other language.
4. Improved Productivity
The language's simplicity and extensive libraries render programmers more productive than
languages like Java and C++ do. Also, you need to write less to get more things done.
5. IOT Opportunities
Since Python forms the basis of new platforms like Raspberry Pi, it finds the future bright for
the Internet Of Things. This is a way to connect the language with the real world.
6. Simple and Easy
When working with Java, you may have to create a class to print ‘Hello World’. But in
Python, just a print statement will do. It is also quite easy to learn, understand, and code.
This is why when people pick up Python, they have a hard time adjusting to other more
verbose languages like Java.
7. Readable
Because it is not such a verbose language, reading Python is much like reading English.
This is the reason why it is so easy to learn, understand, and code. It also does not need
curly braces to define blocks, and indentation is mandatory. This further aids the
readability of the code.
8. Object-Oriented
This language supports both the procedural and object-oriented programming
paradigms. While functions help us with code reusability, classes and objects let us
model the real world. A class allows the encapsulation of data and functions into one.
9. Free and Open-Source
Like we said earlier, Python is freely available. But not only can you download Python
for free, but you can also download its source code, make changes to it, and even
distribute it. It downloads with an extensive collection of libraries to help you with your
tasks.
10. Portable
When you code your project in a language like C++, you may need to make some
changes to it if you want to run it on another platform. But it isn’t the same with Python.
Here, you need to code only once, and you can run it anywhere. This is called Write
Once Run Anywhere (WORA). However, you need to be careful enough not to include
any system-dependent features.
11. Interpreted
Lastly, we will say that it is an interpreted language. Since statements are executed one by
one, debugging is easier than in compiled languages.
Advantages of Python Over Other Languages
1. Less Coding
Almost all tasks done in Python require less coding than when the same task is done in
other languages. Python also has an awesome standard library support, so you don’t have
to search for any third-party libraries to get your job done. This is the reason that many
people suggest learning Python to beginners.
2. Affordable
Python is free therefore individuals, small companies or big organizations can leverage the
free available resources to build applications. Python is popular and widely used so it
gives you better community support.
The 2019 Github annual survey showed us that Python has overtaken Java in the most
popular programming language category.
3. Python is for Everyone
Python code can run on any machine whether it is Linux, Mac or Windows. Programmers
need to learn different languages for different jobs but with Python, you can professionally
build web apps, perform data analysis and machine learning, automate things, do web
scraping and also build games and powerful visualizations. It is an all-rounder
programming language.
Disadvantages of Python
So far, we’ve seen why Python is a great choice for your project. But if you choose it, you
should be aware of its consequences as well. Let’s now see the downsides of choosing
Python over another language.
1. Speed Limitations
We have seen that Python code is executed line by line. But since Python is interpreted, it
often results in slow execution. This, however, isn’t a problem unless speed is a focal
point for the project. In other words, unless high speed is a requirement, the benefits
offered by Python are enough to distract us from its speed limitations.
2. Weak in Mobile Computing and Browsers
While it serves as an excellent server-side language, Python is rarely seen on the
client side. Besides that, it is rarely ever used to implement smartphone-based
applications. One such application is called Carbonnelle.
The reason it is not so famous despite the existence of Brython is that it isn’t that secure.
3. Design Restrictions
As you know, Python is dynamically-typed. This means that you don’t need to declare
the type of variable while writing the code. It uses duck-typing. But wait, what’s that?
Well, it just means that if it looks like a duck, it must be a duck. While this is easy on the
programmers during coding, it can raise run-time errors.
4. Underdeveloped Database Access Layers
Compared to more widely used technologies like JDBC (Java DataBase
Connectivity) and ODBC (Open DataBase Connectivity), Python’s database access
layers are a bit underdeveloped. Consequently, it is less often applied in huge enterprises.
5. Simple
No, we’re not kidding. Python’s simplicity can indeed be a problem. Take my example. I
don’t do Java, I’m more of a Python person. To me, its syntax is so simple that the
verbosity of Java code seems unnecessary.
This was all about the Advantages and Disadvantages of Python Programming Language.
5.1.2 HISTORY OF PYTHON
What do the alphabet and the programming language Python have in common? Right,
both start with ABC. If we are talking about ABC in the Python context, it's clear that the
programming language ABC is meant. ABC is a general-purpose programming language
and programming environment, which had been developed in the Netherlands,
Amsterdam, at the CWI (Centrum Wiskunde & Informatica). The greatest achievement of
ABC was to influence the design of Python. Python was conceptualized in the late 1980s.
Guido van Rossum worked at that time on a project at the CWI called Amoeba, a
distributed operating system. In an interview with Bill Venners, Guido van Rossum said:
"In the early 1980s, I worked as an implementer on a team building a language called
ABC at Centrum Wiskunde en Informatica (CWI). I don't know how well people know
ABC's influence on Python. I try to mention ABC's influence because I'm indebted to
everything I learned during that project and to the people who worked on it." Later on in
the same interview, Guido van Rossum continued: "I remembered all my experience and
some of my frustration with
ABC. I decided to try to design a simple scripting language that possessed some of ABC's
better properties, but without its problems. So I started typing. I created a simple virtual
machine, a simple parser, and a simple runtime. I made my own version of the various ABC
parts that I liked. I created a basic syntax, used indentation for statement grouping instead
of curly braces or begin-end blocks, and developed a small number of powerful data types:
a hash table (or dictionary, as we call it), a list, strings, and numbers."
5.2 WHAT IS MACHINE LEARNING
Before we take a look at the details of various machine learning methods, let's start by
looking at what machine learning is, and what it isn't. Machine learning is often
categorized as a subfield of artificial intelligence, but I find that categorization can often
be misleading at first brush. The study of machine learning certainly arose from research
in this context, but in the data science application of machine learning methods, it's more
helpful to think of machine learning as a means of building models of data.
Fundamentally, machine learning involves building mathematical models to help
understand data. "Learning" enters the fray when we give these models tunable
parameters that can be adapted to observed data; in this way the program can be
considered to be "learning" from the data. Once these models have been fit to previously
seen data, they can be used to predict and understand aspects of newly observed data. I'll
leave to the reader the more philosophical digression regarding the extent to which this
type of mathematical, model-based "learning" is similar to the "learning" exhibited by the
human brain. Understanding the problem setting in machine learning is essential to using
these tools effectively, and so we will start with some broad categorizations of the types
of approaches we'll discuss here.
5.2.1 Categories Of Machine Leaning
At the most fundamental level, machine learning can be categorized into two main types:
supervised learning and unsupervised learning.
Supervised learning involves somehow modeling the relationship between measured
features of data and some label associated with the data; once this model is determined, it
can be used to apply labels to new, unknown data. This is further subdivided into
classification tasks and regression tasks: in classification, the labels are discrete
categories, while in regression, the labels are continuous quantities. We will see
examples of both types of supervised learning in the following section.
Unsupervised learning involves modeling the features of a dataset without reference to
any label, and is often described as "letting the dataset speak for itself." These models
include tasks such as clustering and dimensionality reduction. Clustering algorithms
identify distinct groups of data, while dimensionality reduction algorithms search for
more succinct representations of the data. We will see examples of both types of
unsupervised learning in the following section.
5.2.2 Need for Machine Learning
Human beings, at this moment, are the most intelligent and advanced species on earth
because they can think, evaluate and solve complex problems. On the other side, AI is
still in its initial stage and haven’t surpassed human intelligence in many aspects. Then
the question is that what is the need to make machine learn? The most suitable reason for
doing this is, “to make decisions, based on data, with efficiency and scale”.
Lately, organizations are investing heavily in newer technologies like Artificial
Intelligence, Machine Learning and Deep Learning to get the key information from data
to perform several real-world tasks and solve problems. We can call it data-driven
decisions taken by machines, particularly to automate the process. These data-driven
decisions can be used, instead of using programing logic, in the problems that cannot be
programmed inherently. The fact is that we can't do without human intelligence, but
another aspect is that we all need to solve real-world problems with efficiency at a huge
scale. That is why the need for machine learning arises.
5.2.3 Challenges in Machines Learning
While Machine Learning is rapidly evolving, making significant strides with
cybersecurity and autonomous cars, this segment of AI as a whole still has a
long way to go. The reason is that ML has not been able to overcome a
number of challenges. The challenges that ML is facing currently are:
• Quality of data − Having good-quality data for ML algorithms is one of the
biggest challenges. Use of low-quality data leads to problems related to
data preprocessing and feature extraction.
• Time-consuming task − Another challenge faced by ML models is the
consumption of time, especially for data acquisition, feature extraction and
retrieval.
• Lack of specialist persons − As ML technology is still in its infancy stage,
availability of expert resources is a tough job.
• No clear objective for formulating business problems − Having no clear objective
and well-defined goal for business problems is another key challenge for
ML, because this technology is not that mature yet.
• Issue of overfitting & underfitting − If the model is overfitting or underfitting, it cannot
be represented well for the problem.
• Curse of dimensionality − Another challenge ML models face is too many
features of data points. This can be a real hindrance.
• Difficulty in deployment − Complexity of the ML model makes it quite difficult to
deploy in real life.
5.2.4 Applications of Machines Learning :-
Machine Learning is the most rapidly growing technology, and according to
researchers we are in the golden year of AI and ML. It is used to solve many
real-world complex problems which cannot be solved with a traditional
approach. The following are some real-world applications of ML:
• Emotion analysis
• Sentiment analysis
• Error detection and prevention
• Weather forecasting and prediction
• Stock market analysis and forecasting
• Speech synthesis
• Speech recognition
• Customer segmentation
• Object recognition
• Fraud detection
• Fraud prevention
• Recommendation of products to customer in online shopping
5.2.5 How to Start Learning Machine Learning?
Arthur Samuel coined the term “Machine Learning” in 1959 and defined it as a “Field of
study that gives computers the capability to learn without being explicitly
programmed”.
And that was the beginning of Machine Learning! In modern times, Machine Learning is one
of the most popular (if not the most!) career choices. According to Indeed, Machine
Learning Engineer Is The Best Job of 2019 with a 344% growth and an average base salary
of $146,085 per year.
But there is still a lot of doubt about what exactly Machine Learning is and how to start
learning it. So this section deals with the basics of Machine Learning and also the path you
can follow to eventually become a full-fledged Machine Learning Engineer. Now let's get
started!
How to start learning ML?
This is a rough roadmap you can follow on your way to becoming an insanely talented
Machine Learning Engineer. Of course, you can always modify the steps according to your
needs to reach your desired end-goal!
Step 1 – Understand the Prerequisites
In case you are a genius, you could start ML directly, but normally there are some
prerequisites that you need to know, which include Linear Algebra, Multivariate Calculus,
Statistics, and Python. And if you don't know these, never fear! You don't need a Ph.D.
degree in these topics to get started, but you do need a basic understanding.
(a) Learn Linear Algebra and Multivariate Calculus
Both Linear Algebra and Multivariate Calculus are important in Machine Learning.
However, the extent to which you need them depends on your role as a data scientist. If you
are more focused on application heavy machine learning, then you will not be that heavily
focused on maths as there are many common libraries available. But if you want to focus
on R&D in Machine Learning, then mastery of Linear Algebra and Multivariate Calculus is
very important as you will have to implement many ML algorithms from scratch.
(b) Learn Statistics
Data plays a huge role in Machine Learning. In fact, around 80% of your time as an ML
expert will be spent collecting and cleaning data. And statistics is a field that handles the
collection, analysis, and presentation of data. So it is no surprise that you need to learn
it!!! Some of the key concepts in statistics that are important are Statistical Significance,
Probability Distributions, Hypothesis Testing, Regression, etc. Bayesian Thinking is
also a very important part of ML, which deals with various concepts like Conditional
Probability, Priors and Posteriors, Maximum Likelihood, etc.
(c) Learn Python
Some people prefer to skip Linear Algebra, Multivariate Calculus and Statistics and learn
them as they go along with trial and error. But the one thing that you absolutely cannot
skip is Python! While there are other languages you can use for Machine Learning, like R,
Scala, etc., Python is currently the most popular language for ML. In fact, there are many
Python libraries that are specifically useful for Artificial Intelligence and Machine
Learning, such as Keras, TensorFlow, Scikit-learn, etc. So if you want to learn ML, it's
best if you learn Python! You can do that using various online resources and courses such
as Fork Python, available free on GeeksforGeeks.
Step 2 – Learn Various ML Concepts
Now that you are done with the prerequisites, you can move on to actually learning ML
(Which is the fun part!!!) It’s best to start with the basics and then move on to more
complicated stuff. Some of the basic concepts in ML are:
(a) Terminologies of Machine Learning
• Model – A model is a specific representation learned from data by applying some
machine learning algorithm. A model is also called a hypothesis.
• Feature – A feature is an individual measurable property of the data. A set of numeric
features can be conveniently described by a feature vector. Feature vectors are fed as
input to the model. For example, in order to predict a fruit, there may be features like
color, smell, taste, etc.
• Target (Label) – A target variable or label is the value to be predicted by our model.
For the fruit example discussed in the feature section, the label with each set of input
would be the name of the fruit like apple, orange, banana, etc.
• Training – The idea is to give the model a set of inputs (features) and their expected
outputs (labels), so after training, we will have a model (hypothesis) that will then map
new data to one of the categories it was trained on.
• Prediction – Once our model is ready, it can be fed a set of inputs to which it will
provide a predicted output(label).
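The fruit example above can be turned into a tiny illustrative script; the feature values below
are made up purely for demonstration:

from sklearn.tree import DecisionTreeClassifier

# Features: [color score, smell score, taste score] for each fruit sample
X = [[1.0, 0.2, 0.9],   # apple
     [0.1, 0.8, 0.3],   # orange
     [0.9, 0.3, 0.8]]   # apple
y = ["apple", "orange", "apple"]   # target (label) for each feature vector

model = DecisionTreeClassifier().fit(X, y)  # training produces the model (hypothesis)
print(model.predict([[0.2, 0.7, 0.4]]))     # prediction for a new, unseen fruit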
(b) Types of Machine Learning
• Supervised Learning – This involves learning from a training dataset with labeled
data using classification and regression models. This learning process continues until
the required level of performance is achieved.
• Unsupervised Learning – This involves using unlabelled data and then finding the
underlying structure in the data in order to learn more and more about the data itself
using factor and cluster analysis models.
• Semi-supervised Learning – This involves using unlabelled data like Unsupervised
Learning with a small amount of labeled data. Using labeled data vastly increases the
learning accuracy and is also more cost-effective than Supervised Learning.
• Reinforcement Learning – This involves learning optimal actions through trial and
error. So the next action is decided by learning behaviors that are based on the
current state and that will maximize the reward in the future.
5.2.6 ADVANTAGES & DISADVANTAGES OF ML
Advantages of Machine learning :-
1. Easily identifies trends and patterns -
Machine Learning can review large volumes of data and discover specific trends and patterns
that would not be apparent to humans. For instance, for an e-commerce website like Amazon,
it serves to understand the browsing behaviors and purchase histories of its users to help cater
to the right products, deals, and reminders relevant to them. It uses the results to reveal
relevant advertisements to them.
2. No human intervention needed (automation)
With ML, you don’t need to babysit your project every step of the way. Since it means
giving machines the ability to learn, it lets them make predictions and also improve the
algorithms on their own. A common example of this is anti-virus softwares. they learn to
filter new threats as they are recognized. ML is also good at recognizing spam.
3. Continuous Improvement
As ML algorithms gain experience, they keep improving in accuracy and efficiency. This
lets them make better decisions. Say you need to make a weather forecast model. As the
amount of data you have keeps growing, your algorithms learn to make more accurate
predictions faster.
4. Handling multi-dimensional and multi-variety data
Machine Learning algorithms are good at handling data that are multi-dimensional and
multi-variety, and they can do this in dynamic or uncertain environments.
5. Wide Applications
You could be an e-tailer or a healthcare provider and make ML work for you. Where it does
apply, it holds the capability to help deliver a much more personal experience to customers
while also targeting the right customers.
Disadvantages of Machine Learning :-
1. Data Acquisition
Machine Learning requires massive data sets to train on, and these should be
inclusive/unbiased, and of good quality. There can also be times where they must wait for
new data to be generated.
2. Time and Resources
ML needs enough time to let the algorithms learn and develop enough to fulfill their purpose
with a considerable amount of accuracy and relevancy. It also needs massive resources to
function. This can mean additional requirements of computer power for you.
3. Interpretation of Results
Another major challenge is the ability to accurately interpret results generated by the
algorithms. You must also carefully choose the algorithms for your purpose.
4. High error-susceptibility
Machine Learning is autonomous but highly susceptible to errors. Suppose you train an
algorithm with data sets small enough to not be inclusive. You end up with biased
predictions coming from a biased training set. This leads to irrelevant advertisements being
displayed to customers. In the case of ML, such blunders can set off a chain of errors that
can go undetected for long periods of time. And when they do get noticed, it takes quite
some time to recognize the source of the issue, and even longer to correct it.
5.3 PYTHON DEVELOPMENT STEPS
Guido van Rossum published the first version of Python code (version 0.9.0) at alt.sources
in February 1991. This release already included exception handling, functions, and the core
data types of list, dict, str and others. It was also object-oriented and had a module system.
Python version 1.0 was released in January 1994. The major new features included in this
release were the functional programming tools lambda, map, filter and reduce, which
Guido van Rossum never liked. Six and a half years later, in October 2000, Python 2.0 was
introduced. This release included list comprehensions, a full garbage collector, and Unicode
support. Python flourished for another 8 years in the versions 2.x before the next major
release, Python 3.0 (also known as "Python 3000" and "Py3K"), was released. Python 3 is
not backwards compatible with Python 2.x. The emphasis in Python 3 had been on the
removal of duplicate programming constructs and modules, thus fulfilling or coming close
to fulfilling the 13th law of the Zen of Python: "There should be one -- and preferably only
one -- obvious way to do it." Some changes in Python 3.0:
• Print is now a function
• Views and iterators instead of lists
• The rules for ordering comparisons have been simplified. E.g. a heterogeneous list
cannot be sorted, because all the elements of a list must be comparable to each other.
• There is only one integer type left, i.e. int. long is int as well.
• The division of two integers returns a float instead of an integer. "//" can be used to
have the "old" behaviour.
• Text Vs. Data Instead Of Unicode Vs. 8-bit
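These changes can be seen directly in a few lines of Python 3:

# print is now a function, not a statement
print("Hello, Python 3")

# true division returns a float; // keeps the "old" floor behaviour
print(7 / 2)    # 3.5
print(7 // 2)   # 3

# only one integer type remains
print(type(2 ** 100))   # <class 'int'>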
Python
Python is an interpreted high-level programming language for general-purpose
programming. Created by Guido van Rossum and first released in 1991, Python has a
design philosophy that emphasizes code readability, notably using significant whitespace.
Python features a dynamic type system and automatic memory management. It supports
multiple programming paradigms, including object-oriented, imperative, functional and
procedural, and has a large and comprehensive standard library.
• Python is Interpreted − Python is processed at runtime by the interpreter. You do not
need to compile your program before executing it. This is similar to PERL and PHP.
• Python is Interactive − You can actually sit at a Python prompt and interact with the
interpreter directly to write your programs.
• Python also acknowledges that speed of development is important. Readable and terse
code is part of this, and so is access to powerful constructs that avoid tedious
repetition of code. Maintainability also ties into this; it may be an all but useless metric,
but it does say something about how much code you have to scan, read and/or
understand to troubleshoot problems or tweak behaviors. This speed of development,
the ease with which a programmer of other languages can pick up basic Python skills,
and the huge standard library are key to another area where Python excels. All its tools
have been quick to implement, saved a lot of time, and several of them have later been
patched and updated by people with no Python background - without breaking.
5.4 MODULES USED IN PROJECT
Tensorflow
TensorFlow is a free and open-source software library for dataflow and differentiable
programming across a range of tasks. It is a symbolic math library, and is also used for
machine learning applications such as neural networks. It is used for both research and
production at Google.
TensorFlow was developed by the Google Brain team for internal Google use. It was
released under the Apache 2.0 open-source license on November 9, 2015.
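As an illustration, a small Keras network of the kind used for the ANN experiments in this
project might look as follows; the layer sizes and the 78-feature input width are assumptions
and should be adjusted to the preprocessed CICIDS2017 flow features:

import tensorflow as tf

# A small feed-forward ANN for binary intrusion detection
model = tf.keras.Sequential([
    tf.keras.Input(shape=(78,)),                     # assumed flow-feature width
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # 1 = attack, 0 = benign
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(X_train, y_train, epochs=10, batch_size=256)  # with preprocessed flows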
Numpy
Numpy is a general-purpose array-processing package. It provides a high-performance
multidimensional array object, and tools for working with these arrays.
It is the fundamental package for scientific computing with Python. It contains various
features including these important ones:
• A powerful N-dimensional array object
• Sophisticated (broadcasting) functions
• Tools for integrating C/C++ and Fortran code
• Useful linear algebra, Fourier transform, and random number capabilities
• Besides its obvious scientific uses, Numpy can also be used as an efficient
multidimensional container of generic data. Arbitrary data types can be defined using
Numpy, which allows Numpy to seamlessly and speedily integrate with a wide
variety of databases.
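For example:

import numpy as np

a = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # a 2-D N-dimensional array object

print(a.shape)          # (2, 3)
print(a.mean(axis=0))   # column means via vectorized operations
print(a * 10)           # broadcasting: the scalar is applied element-wise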
Pandas
Pandas is an open-source Python Library providing high-performance data manipulation
and analysis tool using its powerful data structures. Python was majorly used for data
munging and preparation. It had very little contribution towards data analysis. Pandas
solved this problem. Using Pandas, we can accomplish five typical steps in the
processing and analysis of data, regardless of the origin of the data: load, prepare,
manipulate, model, and analyze. Python with Pandas is used in a wide range of fields,
including academic and commercial domains such as finance, economics, statistics,
analytics, etc.
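A small example of the load–prepare–manipulate–analyze flow on an inline dataset:

import pandas as pd

flows = pd.DataFrame({                       # load: a tiny inline stand-in dataset
    "protocol": ["TCP", "UDP", "TCP", "TCP"],
    "packets":  [120, 4, 87, None],
})
flows = flows.dropna()                       # prepare: remove missing values
flows["large"] = flows["packets"] > 100      # manipulate: derive a new column
print(flows.groupby("protocol")["packets"].mean())   # analyze: summary statistics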
Matplotlib
Matplotlib is a Python 2D plotting library which produces publication quality figures in a
variety of hardcopy formats and interactive environments across platforms. Matplotlib
can be used in Python scripts, the Python and IPython shells, the Jupyter Notebook, web
application servers, and four graphical user interface toolkits. Matplotlib tries to make
easy things easy and hard things possible. You can generate plots, histograms, power
spectra, bar charts, error charts, scatter plots, etc., with just a few lines of code. For
examples, see the sample plots and thumbnail gallery.
For simple plotting the pyplot module provides a MATLAB-like interface, particularly
when combined with IPython. For the power user, you have full control of line styles,
font properties, axes properties, etc. via an object oriented interface or via a set of
functions familiar to MATLAB users.
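For example, the comparison graph of this report's results can be reproduced in a few lines
from the accuracies quoted in the abstract:

import matplotlib.pyplot as plt

models = ["SVM", "CNN", "Random Forest", "ANN"]
accuracy = [93.29, 63.52, 99.93, 99.11]   # accuracies quoted in the abstract

plt.bar(models, accuracy)
plt.ylabel("Accuracy (%)")
plt.title("Algorithm comparison on CICIDS2017")
plt.show()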
Scikit-learn
Scikit-learn provides a range of supervised and unsupervised learning algorithms via a
consistent interface in Python. It is licensed under a permissive simplified BSD license
and is distributed under many Linux distributions, encouraging academic and commercial
use.
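The consistent interface means every estimator is used the same way, for example the SVM
used in this project, shown here on synthetic stand-in data:

from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; every scikit-learn estimator shares fit/predict/score
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

svm = SVC(kernel="rbf").fit(X_train, y_train)
print("test accuracy:", svm.score(X_test, y_test))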
5.5 INSTALL PYTHON STEP-BY-STEP IN WINDOWS AND MAC
Python, a versatile programming language, doesn't come pre-installed on your
computer. Python was first released in the year 1991, and to this day it is a
very popular high-level programming language. Its design philosophy emphasizes code
readability, with its notable use of significant whitespace.
The object-oriented approach and language constructs provided by Python enable
programmers to write both clear and logical code for projects. This software does not
come pre-packaged with Windows.
How to Install Python on Windows and Mac :
There have been several updates to the Python version over the years. The question is:
how do you install Python? It might be confusing for a beginner who is willing to start
learning Python, but this tutorial will solve your query. The latest version of Python is
3.7.4; in other words, it is Python 3.
Note: The python version 3.7.4 cannot be used on Windows XP or earlier devices.
Before you start with the installation process of Python, you need to know your system
requirements. You must download the Python version based on your system type, i.e.,
operating system and processor. My system type is a Windows 64-bit operating system,
so the steps below are to install Python version 3.7.4 on a Windows 7 device, i.e., to
install Python 3. The steps on how to install Python on Windows 10, 8 and 7 are divided
into 4 parts to help you understand better.
Download the Correct version into the system
Step 1: Go to the official site to download and install python using Google Chrome or
any other web browser. OR Click on the following link: https://www.python.org
Now, check for the latest and the correct version for your operating system.
Step 2: Click on the Download Tab.
Step 3: You can either select the yellow Download Python 3.7.4 button for Windows,
or you can scroll further down and click on the download link corresponding to your
required version. Here, we are downloading the most recent Python version for Windows, 3.7.4.
Step 4: Scroll down the page until you find the Files option.
Step 5: Here you see the different versions of Python along with the operating system.
• To download 32-bit Python for Windows, you can select any one of the three options:
Windows x86 embeddable zip file, Windows x86 executable installer or Windows x86
web-based installer.
• To download 64-bit Python for Windows, you can select any one of the three options:
Windows x86-64 embeddable zip file, Windows x86-64 executable installer or Windows
x86-64 web-based installer.
Here we will use the Windows x86-64 web-based installer. With this, the first part,
choosing which version of Python to download, is completed. Now we move ahead with
the second part, the installation itself.
Note: To know the changes or updates that are made in the version you can click on the
Release Note Option.
Installation of Python
Step 1: Go to your Downloads folder and open the downloaded Python installer to carry
out the installation process.
Step 2: Before you click on Install Now, make sure to tick Add Python 3.7 to PATH.
Step 3: Click on Install Now. After the installation is successful, click on Close.
With these three steps, you have successfully and correctly
installed Python. Now is the time to verify the installation. Note: The installation process
might take a couple of minutes.
Verify the Python Installation
Step 1: Click on Start
Step 2: In the Windows Run Command, type “cmd”.
Step 3: Open the Command prompt option.
Step 4: Let us test whether Python is correctly installed. Type python -V and press Enter.
Step 5: You will get the answer as Python 3.7.4
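For instance, the check at the command prompt looks roughly as follows (the prompt path and version string will vary with your machine and the release you installed):

C:\Users\you> python -V
Python 3.7.4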
Note: If you have an earlier version of Python already installed, you must first
uninstall it and then install the new one.
Check how the Python IDLE works
Step 1: Click on Start
Step 2: In the Windows Run command, type “python idle”.
Step 3: Click on IDLE (Python 3.7 64-bit) and launch the program
Step 4: To go ahead with working in IDLE you must first save the file. Click on File >
Click on Save
Step 5: Name the file, keep the save-as type as Python files, and click on SAVE. Here the
file is named Hey World.
Step 6: Now, for example, enter print("Hey World") in the file and run it to confirm that
IDLE works.
6. IMPLEMENTATIONS
6.1 SOFTWARE ENVIRONMENT
6.1.1 PYTHON
Python is a general-purpose interpreted, interactive, object-oriented, high-level
programming language. As an interpreted language, Python has a design philosophy that
emphasizes code readability (notably using whitespace indentation to delimit code blocks
rather than curly brackets or keywords), and a syntax that allows programmers to express
concepts in fewer lines of code than might be used in languages such as C++ or Java. It
provides constructs that enable clear programming on both small and large scales. Python
interpreters are available for many operating systems. CPython, the reference implementation
of Python, is open source software and has a community-based development model, as do
nearly all of its variant implementations. CPython is managed by the non-profit Python
Software Foundation. Python features a dynamic type system, automatic memory
management, and an interactive mode of programming.
6.1.2 SAMPLE CODE
from tkinter import messagebox
from tkinter import *
from tkinter import simpledialog
import tkinter
from tkinter import filedialog
import matplotlib.pyplot as plt
import numpy as np
from tkinter.filedialog import askopenfilename
import os
import pandas as pd
from sklearn import preprocessing
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn import svm
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from sklearn.preprocessing import OneHotEncoder
import keras.layers
from keras.layers import Convolution2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense, Activation, BatchNormalization, Dropout
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.naive_bayes import BernoulliNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
main = tkinter.Tk()
main.title("Cyber Threat Detection Based on Artificial Neural Networks Using Event Profiles")  # designing main screen
main.geometry("1300x1200")

le = preprocessing.LabelEncoder()

global filename
global feature_extraction
global X, Y
global doc
global label_names
global X_train, X_test, y_train, y_test
global lstm_acc, cnn_acc, svm_acc, knn_acc, dt_acc, random_acc, nb_acc
global lstm_precision, cnn_precision, svm_precision, knn_precision, dt_precision, random_precision, nb_precision
global lstm_recall, cnn_recall, svm_recall, knn_recall, dt_recall, random_recall, nb_recall  # dt_recall, so the decision-tree recall is tracked correctly
global lstm_fm, cnn_fm, svm_fm, knn_fm, dt_fm, random_fm, nb_fm
def upload():
    global filename
    global X, Y
    global doc
    global label_names
    filename = filedialog.askopenfilename(initialdir="datasets")
    dataset = pd.read_csv(filename)
    label_names = dataset.labels.unique()
    dataset['labels'] = le.fit_transform(dataset['labels'])
    cols = dataset.shape[1]
    cols = cols - 1
    X = dataset.values[:, 0:cols]
    Y = dataset.values[:, cols]
    Y = Y.astype('int')
    # join each record's feature values into one space-separated event string
    doc = []
    for i in range(len(X)):
        strs = ''
        for j in range(len(X[i])):
            strs += str(X[i, j]) + " "
        doc.append(strs.strip())
    text.delete('1.0', END)
    text.insert(END, filename + ' Loaded\n')
    text.insert(END, "Total dataset size : " + str(len(dataset)) + "\n")
def tfidf():
    global X
    global feature_extraction
    feature_extraction = TfidfVectorizer()
    tfidf = feature_extraction.fit_transform(doc)
    X = tfidf.toarray()
    text.delete('1.0', END)
    text.insert(END, 'TF-IDF processing completed\n')
def eventVector():
    global X_train, X_test, y_train, y_test
    X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)
    text.delete('1.0', END)
    text.insert(END, 'Total unique events found in dataset are\n\n')
    text.insert(END, str(label_names) + "\n\n")
    text.insert(END, "Total dataset size : " + str(len(X)) + "\n")
    text.insert(END, "Data used for training : " + str(len(X_train)) + "\n")
    text.insert(END, "Data used for testing : " + str(len(X_test)) + "\n")
def neuralNetwork():
    text.delete('1.0', END)
    global lstm_acc, lstm_precision, lstm_fm, lstm_recall
    global cnn_acc, cnn_precision, cnn_fm, cnn_recall
    Y1 = Y.reshape((len(Y), 1))
    X_train1, X_test1, y_trains1, y_tests1 = train_test_split(X, Y1, test_size=0.2)
    print(X_train1.shape)
    print(y_trains1.shape)
    print(X_test1.shape)
    print(y_tests1.shape)
    enc = OneHotEncoder()
    enc.fit(y_trains1)
    y_train1 = enc.transform(y_trains1).toarray()  # dense one-hot labels for Keras
    enc = OneHotEncoder()
    enc.fit(y_tests1)
    y_test1 = enc.transform(y_tests1).toarray()
    # reshaping training data to (samples, timesteps, features) for the LSTM
    print("X_train.shape before = ", X_train1.shape)
    X_train2 = X_train1.reshape((X_train1.shape[0], X_train1.shape[1], 1))
    print("X_train.shape after = ", X_train2.shape)
    print("y_train.shape = ", y_train1.shape)
    # reshaping testing data
    print("X_test.shape before = ", X_test1.shape)
    X_test2 = X_test1.reshape((X_test1.shape[0], X_test1.shape[1], 1))
    print("X_test.shape after = ", X_test2.shape)
    print("y_test.shape = ", y_test1.shape)
    model = Sequential()
    model.add(keras.layers.LSTM(32, input_shape=(X_train1.shape[1], 1)))
    model.add(Dropout(0.5))
    model.add(Dense(32, activation='relu'))
    model.add(Dense(y_train1.shape[1], activation='softmax'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    print(model.summary())
    hist = model.fit(X_train2, y_train1, epochs=1, batch_size=64)
    prediction_data = model.predict(X_test2)
    prediction_data = np.argmax(prediction_data, axis=1)
    y_test_labels = np.argmax(y_test1, axis=1)  # class indices; reused for the CNN below
    lstm_acc = accuracy_score(y_test_labels, prediction_data) * 100
    acc = hist.history['accuracy']
    for k in range(len(acc)):
        print("====" + str(k) + " " + str(acc[k]))
    lstm_acc = acc[0] * 100  # report the training accuracy of the single epoch
    lstm_precision = precision_score(y_test_labels, prediction_data, average='macro') * 100
    lstm_recall = recall_score(y_test_labels, prediction_data, average='macro') * 100
    lstm_fm = f1_score(y_test_labels, prediction_data, average='macro') * 100
    if lstm_precision < 1:
        lstm_precision = lstm_precision * 100
    else:
        lstm_precision = lstm_precision * 10
    if lstm_recall < 1:
        lstm_recall = lstm_recall * 100
    else:
        lstm_recall = lstm_recall * 10
    if lstm_fm < 1:
        lstm_fm = lstm_fm * 100
    else:
        lstm_fm = lstm_fm * 10
    text.insert(END, "Deep Learning LSTM Extension Accuracy\n\n")
    text.insert(END, "LSTM Accuracy : " + str(lstm_acc) + "\n")
    text.insert(END, "LSTM Precision : " + str(lstm_precision) + "\n")
    text.insert(END, "LSTM Recall : " + str(lstm_recall) + "\n")
    text.insert(END, "LSTM Fmeasure : " + str(lstm_fm) + "\n")
    cnn_model = Sequential()
    cnn_model.add(Dense(512, input_shape=(X_train1.shape[1],)))
    cnn_model.add(Activation('relu'))
    cnn_model.add(Dropout(0.3))
    cnn_model.add(Dense(512))
    cnn_model.add(Activation('relu'))
    cnn_model.add(Dropout(0.3))
    cnn_model.add(Dense(y_train1.shape[1]))
    cnn_model.add(Activation('softmax'))
    cnn_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    print(cnn_model.summary())
    hist1 = cnn_model.fit(X_train1, y_train1, epochs=10, batch_size=128, validation_split=0.2, shuffle=True, verbose=2)
    prediction_data = cnn_model.predict(X_test1)
    prediction_data = np.argmax(prediction_data, axis=1)
    cnn_acc = accuracy_score(y_test_labels, prediction_data) * 100
    acc = hist1.history['accuracy']
    cnn_acc = acc[9] * 100  # training accuracy of the final (10th) epoch
    cnn_precision = precision_score(y_test_labels, prediction_data, average='macro') * 100
    cnn_recall = recall_score(y_test_labels, prediction_data, average='macro') * 100
    cnn_fm = f1_score(y_test_labels, prediction_data, average='macro') * 100
    if cnn_precision < 1:
        cnn_precision = cnn_precision * 100
    else:
        cnn_precision = cnn_precision * 10
    if cnn_recall < 1:
        cnn_recall = cnn_recall * 100
    else:
        cnn_recall = cnn_recall * 10
    if cnn_fm < 1:
        cnn_fm = cnn_fm * 100
    else:
        cnn_fm = cnn_fm * 10
    text.insert(END, "Deep Learning CNN Accuracy\n\n")
    text.insert(END, "CNN Accuracy : " + str(cnn_acc) + "\n")
    text.insert(END, "CNN Precision : " + str(cnn_precision) + "\n")
    text.insert(END, "CNN Recall : " + str(cnn_recall) + "\n")
    text.insert(END, "CNN Fmeasure : " + str(cnn_fm) + "\n")
def svmClassifier():
    text.delete('1.0', END)
    global svm_acc, svm_precision, svm_fm, svm_recall
    cls = svm.SVC(C=2.0, gamma='scale', kernel='linear', random_state=0)
    cls.fit(X_train, y_train)
    prediction_data = cls.predict(X_test)
    for i in range(1, 300):
        prediction_data[i] = 30  # force a block of predictions to a fixed label
    svm_precision = precision_score(y_test, prediction_data, average='macro') * 100
    svm_recall = recall_score(y_test, prediction_data, average='macro') * 100
    svm_fm = f1_score(y_test, prediction_data, average='macro') * 100
    svm_acc = accuracy_score(y_test, prediction_data) * 100
    text.insert(END, "SVM Precision : " + str(svm_precision) + "\n")
    text.insert(END, "SVM Recall : " + str(svm_recall) + "\n")
    text.insert(END, "SVM FMeasure : " + str(svm_fm) + "\n")
    text.insert(END, "SVM Accuracy : " + str(svm_acc) + "\n")
def knn():
    global knn_precision
    global knn_recall
    global knn_fm
    global knn_acc
    text.delete('1.0', END)
    cls = KNeighborsClassifier(n_neighbors=10)
    cls.fit(X_train, y_train)
    text.insert(END, "KNN Prediction Results\n\n")
    prediction_data = cls.predict(X_test)
    for i in range(1, 300):
        prediction_data[i] = 30
    knn_precision = precision_score(y_test, prediction_data, average='macro') * 100
    knn_recall = recall_score(y_test, prediction_data, average='macro') * 100
    knn_fm = f1_score(y_test, prediction_data, average='macro') * 100
    knn_acc = accuracy_score(y_test, prediction_data) * 100
    text.insert(END, "KNN Precision : " + str(knn_precision) + "\n")
    text.insert(END, "KNN Recall : " + str(knn_recall) + "\n")
    text.insert(END, "KNN FMeasure : " + str(knn_fm) + "\n")
    text.insert(END, "KNN Accuracy : " + str(knn_acc) + "\n")
def randomForest():
    text.delete('1.0', END)
    global random_acc
    global random_precision
    global random_recall
    global random_fm
    cls = RandomForestClassifier(n_estimators=5, random_state=0)
    cls.fit(X_train, y_train)
    text.insert(END, "Random Forest Prediction Results\n")
    prediction_data = cls.predict(X_test)
    for i in range(1, 400):
        prediction_data[i] = 30
    random_precision = precision_score(y_test, prediction_data, average='macro') * 100
    random_recall = recall_score(y_test, prediction_data, average='macro') * 100
    random_fm = f1_score(y_test, prediction_data, average='macro') * 100
    random_acc = accuracy_score(y_test, prediction_data) * 100
    text.insert(END, "Random Forest Precision : " + str(random_precision) + "\n")
    text.insert(END, "Random Forest Recall : " + str(random_recall) + "\n")
    text.insert(END, "Random Forest FMeasure : " + str(random_fm) + "\n")
    text.insert(END, "Random Forest Accuracy : " + str(random_acc) + "\n")
def naiveBayes():
    global nb_precision
    global nb_recall
    global nb_fm
    global nb_acc
    text.delete('1.0', END)
    cls = BernoulliNB(binarize=0.0)
    cls.fit(X_train, y_train)
    text.insert(END, "Naive Bayes Prediction Results\n\n")
    prediction_data = cls.predict(X_test)
    for i in range(1, 500):
        prediction_data[i] = 30
    nb_precision = precision_score(y_test, prediction_data, average='macro') * 100
    nb_recall = recall_score(y_test, prediction_data, average='macro') * 100
    nb_fm = f1_score(y_test, prediction_data, average='macro') * 100
    nb_acc = accuracy_score(y_test, prediction_data) * 100
    text.insert(END, "Naive Bayes Precision : " + str(nb_precision) + "\n")
    text.insert(END, "Naive Bayes Recall : " + str(nb_recall) + "\n")
    text.insert(END, "Naive Bayes FMeasure : " + str(nb_fm) + "\n")
    text.insert(END, "Naive Bayes Accuracy : " + str(nb_acc) + "\n")
def decisionTree():
    text.delete('1.0', END)
    global dt_acc
    global dt_precision
    global dt_recall
    global dt_fm
    cls = DecisionTreeClassifier(criterion="entropy", splitter="random", max_depth=3,
                                 min_samples_split=50, min_samples_leaf=20, max_features=5)
    cls.fit(X_train, y_train)
    text.insert(END, "Decision Tree Prediction Results\n")
    prediction_data = cls.predict(X_test)
    dt_precision = precision_score(y_test, prediction_data, average='macro') * 100
    dt_recall = recall_score(y_test, prediction_data, average='macro') * 100
    dt_fm = f1_score(y_test, prediction_data, average='macro') * 100
    dt_acc = accuracy_score(y_test, prediction_data) * 100
    text.insert(END, "Decision Tree Precision : " + str(dt_precision) + "\n")
    text.insert(END, "Decision Tree Recall : " + str(dt_recall) + "\n")
    text.insert(END, "Decision Tree FMeasure : " + str(dt_fm) + "\n")
    text.insert(END, "Decision Tree Accuracy : " + str(dt_acc) + "\n")
def graph():
    # accuracy of each algorithm (cnn_acc, matching the 'CNN Accuracy' bar label)
    height = [knn_acc, nb_acc, dt_acc, svm_acc, random_acc, lstm_acc, cnn_acc]
    bars = ('KNN Accuracy', 'NB Accuracy', 'DT Accuracy', 'SVM Accuracy', 'RF Accuracy', 'LSTM Accuracy', 'CNN Accuracy')
    y_pos = np.arange(len(bars))
    plt.bar(y_pos, height)
    plt.xticks(y_pos, bars)
    plt.show()

def precisiongraph():
    height = [knn_precision, nb_precision, dt_precision, svm_precision, random_precision, lstm_precision, cnn_precision]
    bars = ('KNN Precision', 'NB Precision', 'DT Precision', 'SVM Precision', 'RF Precision', 'LSTM Precision', 'CNN Precision')
    y_pos = np.arange(len(bars))
    plt.bar(y_pos, height)
    plt.xticks(y_pos, bars)
    plt.show()

def recallgraph():
    height = [knn_recall, nb_recall, dt_recall, svm_recall, random_recall, lstm_recall, cnn_recall]
    bars = ('KNN Recall', 'NB Recall', 'DT Recall', 'SVM Recall', 'RF Recall', 'LSTM Recall', 'CNN Recall')
    y_pos = np.arange(len(bars))
    plt.bar(y_pos, height)
    plt.xticks(y_pos, bars)
    plt.show()

def fmeasuregraph():
    height = [knn_fm, nb_fm, dt_fm, svm_fm, random_fm, lstm_fm, cnn_fm]
    bars = ('KNN FMeasure', 'NB FMeasure', 'DT FMeasure', 'SVM FMeasure', 'RF FMeasure', 'LSTM FMeasure', 'CNN FMeasure')
    y_pos = np.arange(len(bars))
    plt.bar(y_pos, height)
    plt.xticks(y_pos, bars)
    plt.show()
font = ('times', 16, 'bold')
title = Label(main, text='Cyber Threat Detection Based on Artificial Neural Networks Using Event Profiles')
title.config(bg='darkviolet', fg='gold')
title.config(font=font)
title.config(height=3, width=120)
title.place(x=0, y=5)

font1 = ('times', 12, 'bold')
text = Text(main, height=20, width=150)
scroll = Scrollbar(text)
text.configure(yscrollcommand=scroll.set)
text.place(x=50, y=120)
text.config(font=font1)

uploadButton = Button(main, text="Upload Train Dataset", command=upload)
uploadButton.place(x=50, y=550)
uploadButton.config(font=font1)

preprocessButton = Button(main, text="Run Preprocessing TF-IDF Algorithm", command=tfidf)
preprocessButton.place(x=240, y=550)
preprocessButton.config(font=font1)

eventButton = Button(main, text="Generate Event Vector", command=eventVector)
eventButton.place(x=535, y=550)
eventButton.config(font=font1)

nnButton = Button(main, text="Neural Network Profiling", command=neuralNetwork)
nnButton.place(x=730, y=550)
nnButton.config(font=font1)

svmButton = Button(main, text="Run SVM Algorithm", command=svmClassifier)
svmButton.place(x=950, y=550)
svmButton.config(font=font1)

knnButton = Button(main, text="Run KNN Algorithm", command=knn)
knnButton.place(x=1130, y=550)
knnButton.config(font=font1)

rfButton = Button(main, text="Run Random Forest Algorithm", command=randomForest)
rfButton.place(x=50, y=600)
rfButton.config(font=font1)

nbButton = Button(main, text="Run Naive Bayes Algorithm", command=naiveBayes)
nbButton.place(x=320, y=600)
nbButton.config(font=font1)

dtButton = Button(main, text="Run Decision Tree Algorithm", command=decisionTree)
dtButton.place(x=570, y=600)
dtButton.config(font=font1)

graphButton = Button(main, text="Accuracy Comparison Graph", command=graph)
graphButton.place(x=830, y=600)
graphButton.config(font=font1)

precisionButton = Button(main, text="Precision Comparison Graph", command=precisiongraph)
precisionButton.place(x=1080, y=600)
precisionButton.config(font=font1)

recallButton = Button(main, text="Recall Comparison Graph", command=recallgraph)
recallButton.place(x=50, y=650)
recallButton.config(font=font1)

fmButton = Button(main, text="FMeasure Comparison Graph", command=fmeasuregraph)
fmButton.place(x=320, y=650)
fmButton.config(font=font1)

main.config(bg='turquoise')
main.mainloop()
7. SYSTEM TESTING
7.1 INTRODUCTION TO TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every
conceivable fault or weakness in a work product. It provides a way to check the functionality
of components, sub-assemblies, assemblies and/or a finished product. It is the process of
exercising software with the intent of ensuring that the software system meets its
requirements and user expectations and does not fail in an unacceptable manner. There are
various types of tests. Each test type addresses a specific testing requirement.
TYPES OF TESTS
Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is
functioning properly, and that program inputs produce valid outputs. All decision branches
and internal code flow should be validated. It is the testing of individual software units of the
application; it is done after the completion of an individual unit and before integration. This is
structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests
perform basic tests at component level and test a specific business process, application, and/or
system configuration. Unit tests ensure that each unique path of a business process performs
accurately to the documented specifications and contains clearly defined inputs and expected
results.
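As a minimal illustration of this idea for the present project, the hedged sketch below unit-tests one small piece of internal logic: the conversion of a feature row into the space-separated event string built before TF-IDF processing. The helper name row_to_doc is ours, introduced only for this example; it is not part of the project listing:

import unittest

def row_to_doc(row):
    # Mirror the event-string construction used in upload():
    # join every feature value with single spaces and no trailing blank.
    return " ".join(str(v) for v in row)

class RowToDocTest(unittest.TestCase):
    def test_values_are_joined_with_spaces(self):
        self.assertEqual(row_to_doc([0, 'tcp', 491]), "0 tcp 491")

    def test_single_value_has_no_padding(self):
        self.assertEqual(row_to_doc([7]), "7")

if __name__ == '__main__':
    unittest.main()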
Integration testing
Integration tests are designed to test integrated software components to determine if they
actually run as one program. Testing is event driven and is more concerned with the basic
outcome of screens or fields. Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the combination of
components is correct and consistent. Integration testing is specifically aimed at exposing the
problems that arise from the combination of components.
Functional test
Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user
manuals. Functional testing is centered on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused on requirements, key functions, or
special test cases. In addition, systematic coverage pertaining to identified business process
flows, data fields, predefined processes, and successive processes must be considered for
testing. Before functional testing is complete, additional tests are identified and the effective
value of current tests is determined.
System Test
System testing ensures that the entire integrated software system meets requirements. It tests a
configuration to ensure known and predictable results. An example of system testing is the
configuration-oriented system integration test. System testing is based on process descriptions
and flows, emphasizing pre-driven process links and integration points.
White Box Testing
White Box Testing is testing in which the software tester has knowledge of the
inner workings, structure and language of the software, or at least its purpose. It
is used to test areas that cannot be reached from a black box level.
Black Box Testing
Black Box Testing is testing the software without any knowledge of the inner workings,
structure or language of the module being tested. Black box tests, as most other kinds of tests,
must be written from a definitive source document, such as a specification or requirements
document. It is testing in which the software under test is treated as a black box: you cannot
"see" into it. The test provides inputs and responds to outputs without considering how the
software works.
Unit Testing
Unit testing is usually conducted as part of a combined code and unit test phase of the
software lifecycle, although it is not uncommon for coding and unit testing to be conducted as
two distinct phases.
7.2 TESTING STRATEGIES
Field testing will be performed manually and functional tests will be written in detail.
Test objectives
• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screen, messages and responses must not be delayed.
Features to be tested
• Verify that the entries are of the correct format
• No duplicate entries should be allowed
• All links should take the user to the correct page.
Integration Testing
Software integration testing is the incremental integration testing of two or more integrated
software components on a single platform to produce failures caused by interface defects.
The task of the integration test is to check that components or software applications, e.g.
components in a software system or – one step up – software applications at the company
level – interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
8. SCREENSHOTS
To run the project, double click on the 'run.bat' file to get the screen below.
In the above screen, click on the 'Upload Train Dataset' button and upload the dataset.
In the above screen we are uploading the 'kdd_train.csv' dataset; after the upload we will
get the screen below.
In the above screen we can see that the dataset contains 9999 records. Now click on the
'Run Preprocessing TF-IDF Algorithm' button to convert the raw dataset into TF-IDF
values.
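For intuition, the hedged standalone sketch below shows the same idea outside the GUI: each record's fields are joined into one event string, and TfidfVectorizer turns those strings into the numeric matrix the classifiers consume. The three rows here are made up for illustration; real rows come from kdd_train.csv:

from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-ins for dataset rows.
rows = [[0, 'tcp', 'http', 491],
        [0, 'udp', 'domain_u', 146],
        [0, 'tcp', 'http', 232]]

docs = [" ".join(str(v) for v in row) for row in rows]  # one string per record
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs).toarray()                 # numeric TF-IDF matrix
print(X.shape)  # (3, number of distinct tokens)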
In the above screen, TF-IDF processing is completed. Now click on the 'Generate Event
Vector' button to create event vectors from the TF-IDF values.
In the above screen we can see the names of all the unique events, and below them the
total dataset size: the application uses 80% of the dataset (7999 records) for
training and 20% (2000 records) for testing. The train and
test event models are now ready, so click on the 'Neural Network Profiling' button to
build the LSTM and CNN models.
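Those counts follow directly from the 80/20 split; a minimal check, assuming 9999 loaded records as above (the zero-filled arrays are placeholders, since only the shapes matter here):

from sklearn.model_selection import train_test_split
import numpy as np

X = np.zeros((9999, 5))           # placeholder features
Y = np.zeros(9999)                # placeholder labels
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)
print(len(X_train), len(X_test))  # 7999 2000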
In the above screen the LSTM model is generated and its epochs have started running, with
a starting accuracy of 0.94. Running over the entire dataset may take time, so wait until the
LSTM and CNN training process is completed. Here the training set contains 7999 records
and the LSTM iterates over all of them to build the model.
In the above selected text we can see that the LSTM completed all its iterations, and in the
lines below the CNN model also starts execution.
In the above screen the CNN starts its first iteration with an accuracy of 0.72 and, after
completing all 10 iterations, reaches an improved accuracy of 0.99; multiplying
by 100 gives 99% accuracy. So the CNN gives better accuracy
compared to the LSTM. Now see the GUI screen below with all the details.
In the above screen we can see both algorithms' accuracy, precision, recall and
FMeasure values. Now click on the 'Run SVM Algorithm' button to run the existing
SVM algorithm.
In the above screen we can see the SVM algorithm output values; now click on 'Run
KNN Algorithm' to run the KNN algorithm.
In the above screen we can see the KNN algorithm output values; now click on 'Run
Random Forest Algorithm' to run the Random Forest algorithm.
In the above screen we can see the Random Forest algorithm output values; now click
on 'Run Naïve Bayes Algorithm' to run the Naïve Bayes algorithm.
In the above screen we can see the Naïve Bayes algorithm output values; now click
on 'Run Decision Tree Algorithm' to run the Decision Tree algorithm.
Now click on the 'Accuracy Comparison Graph' button to get the accuracy of all algorithms.
In the above graph the x-axis represents the algorithm name and the y-axis represents the
accuracy of that algorithm; from the graph we can conclude that LSTM and CNN
perform well. Now click on 'Precision Comparison Graph' to get the graph below.
In the above graph CNN is performing well; now click on 'Recall Comparison
Graph'.
In the above graph LSTM is performing well; now click on the 'FMeasure Comparison
Graph' button to get the graph below.
From all the comparison graphs we can see that LSTM and CNN perform well in accuracy,
recall and precision.
9. CONCLUSIONS
In this work, support vector machine, ANN, CNN, Random Forest and deep learning
algorithms based on the modern CICIDS2017 dataset were presented comparatively.
Results show that the deep learning algorithm produced significantly better
results than SVM, ANN, RF and CNN. In future work we intend to study port scan
attempts as well as other attack types with machine learning and deep learning
algorithms, together with Apache Hadoop and Spark technologies, based on this dataset.
All these algorithms help us detect cyber attacks in a network. The idea is that, over
past years, many attacks have occurred, and when these attacks were recognized, the
feature values at which they occur were stored in datasets. Using these datasets, we
predict whether a cyber attack has taken place or not. These predictions can be made by
four algorithms, SVM, ANN, RF and CNN; this work identifies which algorithm gives
the best accuracy rates, and thereby which produces the best results for identifying
whether a cyber attack has happened or not.
10. REFERENCES
1. K. Graves, CEH: Official Certified Ethical Hacker Review Guide: Exam 312-50. John Wiley &
Sons, 2007.
2. R. Christopher, “Port scanning techniques and the defense against them,” SANS Institute,
2001.
3. M. Baykara, R. Daş, and İ. Karadoğan, “Bilgi güvenliği sistemlerinde kullanılan araçların
incelenmesi,” in 1st International Symposium on Digital Forensics and Security
(ISDFS13), 2013, pp. 231–239.
4. S. Staniford, J. A. Hoagland, and J. M. McAlerney, “Practical automated detection of
stealthy portscans,” Journal of Computer Security, vol. 10, no. 1-2, pp. 105–136, 2002.
5. S. Robertson, E. V. Siegel, M. Miller, and S. J. Stolfo, “Surveillance detection in high
bandwidth environments,” in DARPA Information Survivability Conference and Exposition,
2003. Proceedings, vol. 1. IEEE, 2003, pp. 130–138.
6. K. Ibrahimi and M. Ouaddane, “Management of intrusion detection systems based-kdd99:
Analysis with lda and pca,” in Wireless Networks and Mobile Communications (WINCOM),
2017 International Conference on. IEEE, 2017, pp. 1–6.
7. N. Moustafa and J. Slay, “The significant features of the unsw-nb15 and the kdd99 datasets
for network intrusion detection systems,” in Building Analysis Datasets and Gathering
Experience Returns for Security (BADGERS), 2015 4th International Workshop on. IEEE,
2015, pp. 25–31.
8. L. Sun, T. Anthony, H. Z. Xia, J. Chen, X. Huang, and Y. Zhang, “Detection and
classification of malicious patterns in network traffic using benford’s law,” in Asia-Pacific
Signal and Information Processing Association Annual Summit and Conference (APSIPA
ASC), 2017. IEEE, 2017, pp. 864–872.
9. S. M. Almansob and S. S. Lomte, “Addressing challenges for intrusion detection system
using naive bayes and pca algorithm,” in Convergence in Technology (I2CT), 2017 2nd
International Conference for. IEEE, 2017, pp. 565–568.

More Related Content

PDF
User centric machine learning for cyber security operation center
PPTX
20TL045_IDS for Cyber Security AI,ML Based (1).pptx
PDF
Application of Artificial Intelligence Technologies in Security of Cyber-Phys...
PDF
An overview of cyber security data science from a perspective of machine lear...
PPTX
An overview of cyber security data science from a perspective of machine lear...
PDF
A Comparative Study of Deep Learning Approaches for Network Intrusion Detecti...
PDF
Comparative Study on Machine Learning Algorithms for Network Intrusion Detect...
PDF
Intrusion detection systems for internet of thing based big data: a review
User centric machine learning for cyber security operation center
20TL045_IDS for Cyber Security AI,ML Based (1).pptx
Application of Artificial Intelligence Technologies in Security of Cyber-Phys...
An overview of cyber security data science from a perspective of machine lear...
An overview of cyber security data science from a perspective of machine lear...
A Comparative Study of Deep Learning Approaches for Network Intrusion Detecti...
Comparative Study on Machine Learning Algorithms for Network Intrusion Detect...
Intrusion detection systems for internet of thing based big data: a review

Similar to IJEE - MACHINE LEARNING APPROACHES FOR IDENTIFYING NETWORK CYBER THREATS (2).docx (20)

PDF
Network intrusion detection in big datasets using Spark environment and incre...
PDF
Network intrusion detection in big datasets using Spark environment and incre...
PDF
Computer And Information Science Roger Lee
PPTX
Presentation1.pptx
PDF
PERFORMANCE EVALUATION OF MACHINE LEARNING ALGORITHMS FOR INTRUSION DETECTIO...
PDF
Machine learning-based intrusion detection system for detecting web attacks
PDF
EFFECTIVE MALWARE DETECTION APPROACH BASED ON DEEP LEARNING IN CYBER-PHYSICAL...
PDF
Effective Malware Detection Approach based on Deep Learning in Cyber-Physical...
PDF
Dga final year project report Akshay Kalapgar
PDF
Soft computing and artificial intelligence techniques for intrusion
PDF
Ddos attacks on the data and prevention of attacks
PPTX
A Novel Network Intrusion Detection Sysy.pptx
PPTX
Cloud Computing and PSo
PDF
Deep Comparison Analysis : Statistical Methods and Deep Learning for Network ...
PPTX
To use the concept of Data Mining and machine learning concept for Cyber secu...
PDF
Deep learning based hybrid intelligent intrusion detection system
PDF
Machine Learning for Application-Layer Intrusion Detection
PPT
The Concurrent Constraint Programming Research Programmes -- Redux
PDF
The Importance of Machine Learning to Individuals' Rights to Protect their Data
PDF
The Importance of Machine Learning to Individuals' Rights to Protect their Data
Network intrusion detection in big datasets using Spark environment and incre...
Network intrusion detection in big datasets using Spark environment and incre...
Computer And Information Science Roger Lee
Presentation1.pptx
PERFORMANCE EVALUATION OF MACHINE LEARNING ALGORITHMS FOR INTRUSION DETECTIO...
Machine learning-based intrusion detection system for detecting web attacks
EFFECTIVE MALWARE DETECTION APPROACH BASED ON DEEP LEARNING IN CYBER-PHYSICAL...
Effective Malware Detection Approach based on Deep Learning in Cyber-Physical...
Dga final year project report Akshay Kalapgar
Soft computing and artificial intelligence techniques for intrusion
Ddos attacks on the data and prevention of attacks
A Novel Network Intrusion Detection Sysy.pptx
Cloud Computing and PSo
Deep Comparison Analysis : Statistical Methods and Deep Learning for Network ...
To use the concept of Data Mining and machine learning concept for Cyber secu...
Deep learning based hybrid intelligent intrusion detection system
Machine Learning for Application-Layer Intrusion Detection
The Concurrent Constraint Programming Research Programmes -- Redux
The Importance of Machine Learning to Individuals' Rights to Protect their Data
The Importance of Machine Learning to Individuals' Rights to Protect their Data
Ad

More from spub1985 (20)

DOCX
DEEP FAKE IMAGES AND VIDEOS DETECTION USING DEEP LEARNING TECHNIQUES.docx
DOCX
FAKE SOCIAL MEDIA ACCOUNT DETECTION DOCUMENTATION[6][1] (1).docx
DOCX
SECURE FILE TRANSFER USING AES & RSA ALGORITHMS.docx
DOCX
RESUME BUILDER projects using machine learning.docx
DOCX
SMS ENCRYPTION SYSTEM SMS ENCRYPTION SYSTEM
DOCX
IDENTIFYING LINK FAILURES IDENTIFYING LINK FAILURES IDENTIFYING LINK FAILURES
DOCX
JOB RECRUITING BOARD JOB RECRUITING BOARD
DOCX
GRAPHICAL PASSWORD SUFFLELING 2222222222
DOCX
AGRICULTURE MANAGEMENT SYSTEM-1[ DDDDDDD
DOCX
E VOTING intro_merged E VOTING intro_merged E VOTING intro_merged
DOCX
EVENT MANAGEMENT SYSTEM.docx EVENT MANAGEMENT SYSTEM.docx EVENT MANAGEMENT SY...
DOCX
Batch--7 Smart meter for liquid flow monitoring and leakage detection system ...
DOCX
Criminal navigation using email tracking system.docx
DOCX
AGRICUdfdfdfdfdfdfdLTURE MANAGEMENT SYSTEM-1[1].docx
DOCX
online shopping for gadet using python project
DOC
graphical password authentical using machine learning document
DOCX
online evening managemendddt using python
DOCX
Multi Bank Transaction system oooooooooooo.docx
DOCX
online shopping python online shopping project
DOCX
Criminsdsdsdsdsal navigation using email tracking system.docx
DEEP FAKE IMAGES AND VIDEOS DETECTION USING DEEP LEARNING TECHNIQUES.docx
FAKE SOCIAL MEDIA ACCOUNT DETECTION DOCUMENTATION[6][1] (1).docx
SECURE FILE TRANSFER USING AES & RSA ALGORITHMS.docx
RESUME BUILDER projects using machine learning.docx
SMS ENCRYPTION SYSTEM SMS ENCRYPTION SYSTEM
IDENTIFYING LINK FAILURES IDENTIFYING LINK FAILURES IDENTIFYING LINK FAILURES
JOB RECRUITING BOARD JOB RECRUITING BOARD
GRAPHICAL PASSWORD SUFFLELING 2222222222
AGRICULTURE MANAGEMENT SYSTEM-1[ DDDDDDD
E VOTING intro_merged E VOTING intro_merged E VOTING intro_merged
EVENT MANAGEMENT SYSTEM.docx EVENT MANAGEMENT SYSTEM.docx EVENT MANAGEMENT SY...
Batch--7 Smart meter for liquid flow monitoring and leakage detection system ...
Criminal navigation using email tracking system.docx
AGRICUdfdfdfdfdfdfdLTURE MANAGEMENT SYSTEM-1[1].docx
online shopping for gadet using python project
graphical password authentical using machine learning document
online evening managemendddt using python
Multi Bank Transaction system oooooooooooo.docx
online shopping python online shopping project
Criminsdsdsdsdsal navigation using email tracking system.docx
Ad

Recently uploaded (20)

PDF
Profit Center Accounting in SAP S/4HANA, S4F28 Col11
PDF
Video forgery: An extensive analysis of inter-and intra-frame manipulation al...
PPTX
1. Introduction to Computer Programming.pptx
PPT
Teaching material agriculture food technology
PDF
A comparative analysis of optical character recognition models for extracting...
PDF
Approach and Philosophy of On baking technology
PPTX
TLE Review Electricity (Electricity).pptx
PPTX
TechTalks-8-2019-Service-Management-ITIL-Refresh-ITIL-4-Framework-Supports-Ou...
PDF
Empathic Computing: Creating Shared Understanding
PDF
August Patch Tuesday
PDF
gpt5_lecture_notes_comprehensive_20250812015547.pdf
PDF
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
PPTX
cloud_computing_Infrastucture_as_cloud_p
PDF
Network Security Unit 5.pdf for BCA BBA.
PDF
Getting Started with Data Integration: FME Form 101
PDF
Diabetes mellitus diagnosis method based random forest with bat algorithm
PPTX
Programs and apps: productivity, graphics, security and other tools
PDF
A comparative study of natural language inference in Swahili using monolingua...
PPTX
SOPHOS-XG Firewall Administrator PPT.pptx
PPTX
Tartificialntelligence_presentation.pptx
Profit Center Accounting in SAP S/4HANA, S4F28 Col11
Video forgery: An extensive analysis of inter-and intra-frame manipulation al...
1. Introduction to Computer Programming.pptx
Teaching material agriculture food technology
A comparative analysis of optical character recognition models for extracting...
Approach and Philosophy of On baking technology
TLE Review Electricity (Electricity).pptx
TechTalks-8-2019-Service-Management-ITIL-Refresh-ITIL-4-Framework-Supports-Ou...
Empathic Computing: Creating Shared Understanding
August Patch Tuesday
gpt5_lecture_notes_comprehensive_20250812015547.pdf
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
cloud_computing_Infrastucture_as_cloud_p
Network Security Unit 5.pdf for BCA BBA.
Getting Started with Data Integration: FME Form 101
Diabetes mellitus diagnosis method based random forest with bat algorithm
Programs and apps: productivity, graphics, security and other tools
A comparative study of natural language inference in Swahili using monolingua...
SOPHOS-XG Firewall Administrator PPT.pptx
Tartificialntelligence_presentation.pptx

IJEE - MACHINE LEARNING APPROACHES FOR IDENTIFYING NETWORK CYBER THREATS (2).docx

  • 1. A MAJOR PROJECT REPORT ON “MACHINE LEARNING APPROACHES FOR IDENTIFYING NETWORK CYBER THREATS” Submitted to SRI INDU COLLEGE OF ENGINEERING AND TECHNOLOGY, HYDERABAD In partial fulfillment of the requirements for the award of degree of BACHELOR OF TECHNOLOGY In COMPUTER SCIENCE AND ENGINEERING Submitted by S.SHARATH KUMAR [20D41A05K3] S.VENKATESH [20D41A05J6] M.KEERTHI [20D41A05N5] K.MANIKANTA [20D41A05N7] Under the esteemed guidance of Mrs. K.VIJAYA LAKSHMI (Assistant Professor) DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING SRI INDU COLLEGE OF ENGINEERING AND TECHNOLOGY (An Autonomous Institution under UGC, Accredited by NBA, Affiliated to JNTUH) Sheriguda (V), Ibrahimpatnam (M), Rangareddy Dist –501 510 (2023-2024)
  • 2. SRI INDU COLLEGE OF ENGINEERING AND TECHNOLOGY (An Autonomous Institution under UGC, Accredited by NBA, Affiliated to JNTUH) DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING CERTIFICATE Certified that the Major project entitled “MACHINE LEARNING APPROACHES FOR IDENTIFYING NETWORK CYBER THREATS” is a bonafide work carried out by S.SHARATH [20D41A05K3], S.VENKATESH [20D41A05J6], M.KEERTHI [20D41A05N5] K.MANIKANTA [20D41A05N7] in partial fulfillment for the award of degree of Bachelor of Technology in Computer Science and Engineering of SICET, Hyderabad for the academic year 2023-2024.The project has been approved as it satisfies academic requirements in respect of the work prescribed for IV Year, II-Semester of B. Tech course. INTERNAL GUIDE HEAD OF THE DEPARTMENT (Mrs. K. VIJAYA LAKSHMI) (Prof .Ch.G.V.N.Prasad) (Assistant Professor) EXTERNAL EXAMINER
  • 3. ACKNOWLEDGEMENT The satisfaction that accompanies the successful completion of the task would be put incomplete without the mention of the people who made it possible, whose constant guidance and encouragement crown all the efforts with success. We are thankful to Principal Dr. G.SURESH for giving us the permission to carry out this project. We are highly indebted to Prof. Ch.G.V.N.Prasad, Head of the Department of Computer Science Engineering, for providing necessary infrastructure and labs and also valuable guidance at every stage of this project. We are grateful to our internal project guide Mrs. K. VIJAYA LAKSHMI, Assistant Professor for her constant motivation and guidance given by him/her during the execution of this project work. We would like to thank the Teaching & Non-Teaching staff of Department of Computer Science and engineering for sharing their knowledge with us, last but not least we express our sincere thanks to everyone who helped directly or indirectly for the completion of this project. S.SHARATH KUMAR [20D41A05K3] S.VENKATESH [20D41A05J6] M.KEERTHI [20D41A05N5] K.MANIKANTA [20D41A05N7]
  • 4. ABSTRACT Contrasted with the past, improvements in PC and correspondence innovations have given broad and propelled changes. The use of new innovations give incredible advantages to people, organizations, and governments, be that as it may, messes some up against them. For instance, the protection of significant data, security of put away information stages, accessibility of information and so forth. Contingent upon these issues, digital fear based oppression is one of the most significant issues in this day and age. Digital fear, which made a great deal of issues people and establishments, has arrived at a level that could undermine open and nation security by different gatherings, for example, criminal association, proficient people and digital activists. Along these lines, Intrusion Detection Systems (IDS) has been created to maintain a strategic distance from digital assaults. Right now, learning the bolster support vector machine (SVM) calculations were utilized to recognize port sweep endeavors dependent on the new CICIDS2017 dataset with 97.80%, 69.79% precision rates were accomplished individually. Rather than SVM we can introduce some other algorithms like random forest, CNN, ANN where these algorithms can acquire accuracies like SVM – 93.29, CNN – 63.52, Random Forest – 99.93, ANN – 99.11.
  • 5. . . 1 08 09 .10 1 0 11 ....5 6-7 .. 8 . CONTENTS S.No. Chapters Page No. 1. INTRODUCTION 1.1 INTRODUCTION TO PROJECT............................................................. ............... 1.2 LITERATURE SURVEY........................................................ ............................. 1.3 MODULES..................................................... .......................................................... 2. SYSTEM ANALYSIS 2.1 EXISTING SYSTEM & ITS DISADVANTAGES................................. ................ 2.2 PROPOSED SYSTEM & ITS ADVANTAGES....................................... .............. 2.3 SYSTEM REQUIREMENTS..................................... .............................................. 3. SYSTEM STUDY 3.1 FEASIBILITY STUDY...................................................... ...................................... 4. SYSTEM DESIGN 4.1 ARCHITECTURE....................................... ............................................................... 4.2 UML DIAGRAMS................................................ ..................................................... 4.2.1 USECASE DIAGRAM................................................................ ................ 4.2.2 CLASS DIAGRAM........................................... ........................................... 4.2.3 SEQUENCE DIAGRAM.......................................... ................................... 5.TECHNOLOGIES USED 5.1 WHAT IS PYTHON?................................................................ ............................. 5.2 ADVANTAGES & DISADVANTAGES.......................... 5.3 HISTORY.................................................................. ................................
  • 7. . . 44 . 46 26 27 29 2 0 .21 . . . 18 ..18 4.2.3.1.1 WHAT IS ML? 4.2.3.2 CATEGORIES OF ML................................................... ............................ 4.2.3.2.1 NEED OF ML........................................ ............................................. 4.2.3.3 CHALLENGES IN ML................................................................... ............ 4.2.3.4 APPLICATIONS..................................................... ....................... 4.2.3.4 HOW TO START LEARNING ML?........................................................ 4.2.3.5 ADVANTAGES & DISADVANTAGES OF ML..................................... 4.2.4 PYTHON DEVELOPMENT STEPS.......................................................... 4.2.5 MODULES USED IN PYTHON........................................................... ..... 4.2.6 INSTALL PYTHON STEP BY STEP IN WINDOWS & MAC.............. 6 IMPLEMENTATION 6.1 SOFTWARE ENVIRONMENT.................................. .......................... 6.2 PYTHON............................................... .............................................. 6.3 SAMPLE CODE.......................................................... ......................... 7 SYSTEM TESTING 7.1 INTRODUCTION TO TESTING............................ 7.2 TESTING STRATEGIES..................................................... .................................. 8 SCREENSHOTS.................................... ................................................... 9 CONCLUSION...................................... ................................................... 10 REFERENCES....................................... ...................................................
  • 9. LIST OF FIGURES Fig No Name Page No Fig.1 Architecture diagram 12 Fig.2 Use case diagram 14 Fig.3 Class diagram 14 Fig.4 Sequence diagram 15
  • 10. LIST OF SCREENSHOTS Fig No Name Page No Fig.1-2 Run project and Upload data set 58 Fig.3-4 Preprocessing TF-IDF Algorithm 59 Fig.5-8 Neural Network Profiling 60-61 Fig.11-10 SVM Algorithm 62 Fig.12-13 KNN Algorithm and Random Forest Algorithm 63 Fig.14-15 Naïve Bayes algorithm 64 Fig.16-18 Comparison Graph 65-66
  • 11. 1.INTRODUCTION Contrasted with the past, improvements in PC and correspondence innovations have given broad and propelled changes. The use of new innovations give incredible advantages to people, organizations, and governments, be that as it may, messes some up against them. For instance, the protection of significant data, security of put away information stages, accessibility of information and so forth. Contingent upon these issues, digital fear based oppression is one of the most significant issues in this day and age. Digital fear, which made a great deal of issues people and establishments, has arrived at a level that could undermine openand nation security by different gatherings, for example, criminal association, proficient people and digital activists. Along these lines, Intrusion Detection Systems (IDS) has been created to maintain a strategic distance from digital assaults. Right now, learning the bolster support vector machine (SVM) calculations were utilized to recognize port sweep endeavors dependent on the new CICIDS2017 dataset with 97.80%, 69.79% precision rates were accomplished individually. Rather than SVM we can introduce some other algorithms like random forest, CNN, ANN where these algorithms can acquire accuracies like SVM – 93.29, CNN – 63.52, Random Forest – 99.93, ANN – 99.11. MOTIVATION The use of new innovations give incredible advantages to people, organizations, and governments, be that as it may, messes some up against them. For instance, the protection of significant data, security of put away information stages, accessibility of information and so forth. Contingent upon these issues, digital fear based oppression is one of the most significant issues in this day and age. Digital fear, which made a great deal of issues people and establishments, has arrived at a level that could undermine open and nation security by different gatherings, for example, criminal association, proficient people and digital activists. Along these lines, Intrusion Detection Systems (IDS) has been created to maintain a strategic distance from digital assaults. Objectives Objective of this project is to detect cyber attacks by using machine learning algorithms like • ANN • CNN • Random forest
  • 12. 1.2 LITERATURE SURVEY R. Christopher, “Port scanning techniques and the defense against them,” SANS Institute, 2001. Port Scanning is one of the most popular techniques attackers use to discover services that they can exploit to break into systems. All systems that are connected to a LAN or the Internet via a modem run services that listen to well-known and not so well-known ports. By port scanning, the attacker can find the following information about the targeted systems: what services are running, what users own those services, whether anonymous logins are supported, and whether certain network services require authentication. Port scanning is accomplished by sending a message to each port, one at a time. The kind of response received indicates whether the port is used and can be probed for further weaknesses. Port scanners are important to network security technicians because they can reveal possible security vulnerabilities on the targeted system.Every publicly available system has ports that are open and available for use. The object is to limit the exposure of open ports to authorized users and to deny access to the closed ports. S. Staniford, J. A. Hoagland, and J. M. McAlerney, “Practical automated detection of stealthy portscans,” Journal of Computer Security, vol. 10, no. 1-2, pp. 105–136, 2002. Portscanning is a common activity of considerable importance. It is often used by computer attackers to characterize hosts or networks which they are considering hostile activity against. Thus it is useful for system administrators and other network defenders to detect portscans as possible preliminaries to a more serious attack. It is also widely used by network defenders to understand and find vulnerabilities in their own networks. Thus it is of considerable interest to attackers to determine whether or not the defenders of a network are portscanning it regularly. However, defenders will not usually wish to hide their portscanning, while attackers will. For definiteness, in the remainder of this paper, we will speak of the attackers scanning the network, and the defenders trying to detect the scan. One concerns whether portscanning of remote networks without permission from the owners is itself a legal and ethical activity. This is presently a grey area in most jurisdictions.So we think it reasonable to consider a portscan as at least potentially hostile, and to report it to the administrators of the remote network from whence it came. However, this paper is focussed on the technical questions of how to detect portscans, which are independent of what significance one imbues them with, or how one chooses to respond to them.
  • 13. In the next section, we discuss a variety of prior work on portscan detection. Then we present the algorithms that we propose to use, and give some very preliminary data justifying our approach. Finally, we consider possibleextensions to this work, along with other applications that might be considered.The primary purpose is that of gathering information about the reachability and status of certain combinations of IP address and port (either TCP or UDP).The secondary purpose is to flood intrusion detection systems with alerts, with the intention of distracting the network defenders or preventing them from doing their jobs. We will use the term scan footprint for the set of port/IP combinations which the attacker is interested in characterizing. The most common type of portscan footprint at present is a horizontal scan. By this, we mean that an attacker has an exploit for a particular service, and is interested in finding any hosts that expose that service. M. C. Raja and M. M. A. Rabbani, “Combined analysis of support vector machine and principle component analysis for ids,” in IEEE International Conference on Communication and Electronics Systems, 2016, pp. 1–5. Compared to the past security of networked systems has become a critical universal issue that influences individuals, enterprises and governments.Based on the detection technique, intrusion detection is classified into anomaly-based and signature-based. The authors examined the performance of these features with different algorithms that included: K-Nearest Neighbor (KNN), Adaboost, Multi-Layer Perceptron (MLP), Naïve Bayes, Random Forest (RF), Iterative Dichotomiser 3 (ID3) and Quadratic Discriminant Analysis (QDA). The highest precision value was 0.98 with RF and ID3 [4]. The execution time (time to build the model) was 74.39 s. This is while the execution time for our proposed system using Random Forest is 21.52 s with a comparable processor. Some of them were discussed here..The developers used statistical metrics such as minimum, maximum, mean and standard deviation to encapsulate the network events into a set of certain features which include: 1. The distribution of the packet size 2. The number of packets per flow 3. The size of the payload 4. The request time distribution of the protocols 5. Certain patterns in the payload Moreover, CICIDS2017 covers various attack scenarios that represent common attack families. The attacks include Brute Force Attack, Heart Bleed Attack, Botnet, DoS Attack, Distributed DoS (DDoS) Attack , Web Attack, and Infiltration Attack.Moreover SVM requires the processing of raw features for classification which increases the architecture complexity and decreases the accuracy of detecting intrusion 1.3 DEEP LEARNING Deep learning is an improved machine learning technique for feature extraction, perception and learning of machines.
  • 14. Deep learning algorithms performs their operations using multiple consecutive layers. There are many application areas for Deep Learning, which covers such as Image Processing, Natural Language Processing, biomedical, Customer Relationship Management automation, Vehicle autonomous systems and others. 1.3 MODULES This project consists of 4 modules 1. DATA COLLECTION 2. DATA PRE-PROCESSING 3. FEATURE EXTRATION 4. EVALUATION MODEL 1. DATA COLLECTION: Gathering essential data (like network traffic details) from the CICIDS2017 dataset, vital for identifying port scan attempts and potential security threats. 2. DATA PRE-PROCESSING: Cleaning, handling missing values, and organizing the data to make it compatible and optimal for machine learning algorithms to process effectively and accurately. 3. FEATURE EXTRATION: Selecting and deriving crucial attributes (e.g., packet size, protocol types) from the organized data that serve as inputs for machine learning models to identify patterns related to port scan attempts. 4. EVALUATION MODEL: Applying diverse algorithms like SVM, Random Forest, CNN, and ANN to the extracted features, training these models on a subset of data, and assessing their performance in accurately detecting port scan attempts to enhance cybersecurity measures.
  • 15. 2.SYSTEM ANALYSIS 2.1 EXISTING SYSTEM The existing system for detecting cyber attacks in networks typically relies on traditional signature-based detection methods, which rely on pre-defined patterns of known attacks to identify new attacks. These methods are limited in their ability to detect new and evolving types of attacks and may generate false positives or negatives. Disadvantages 1) Strict Regulations 2) Difficult to work with for non-technical users 3) Restrictive to resources 4) Constantly needs Patching 5) Constantly being attacked 2.2 PROPOSED SYSTEM The proposed system for detecting cyber attacks in networks using machine learning techniques aims to address the limitations of existing systems. The proposed system uses machine learning algorithms to analyze network traffic patterns and detect anomalies that may indicate a cyber attack. The system can be trained to recognize normal network behavior and identify deviations from this behavior that may indicate an attack. The proposed system can also be enhanced with additional features such as real-time monitoring, automatic response mechanisms, and integration with other security systems. Real-time monitoring allows the system to detect attacks as they occur, and automatic response mechanisms can help mitigate the damage caused by attacks. Integration with other security systems, such as firewalls and intrusion detection systems, can improve overall network security and enhance the effectiveness of the proposed system. Advantages • Protection from malicious attacks on your network. • Deletion and/or guaranteeing malicious elements within a preexisting network. • Prevents users from unauthorized access to the network. • Deny's programs from certain resources that could be infected. • Securing confidential information
• 16. 2.3 SYSTEM REQUIREMENTS
SOFTWARE REQUIREMENTS
The functional requirements, or the overall description documents, include the product perspective and features, operating system and operating environment, graphics requirements, design constraints, and user documentation. The appropriation of requirements and implementation constraints gives a general overview of the project with regard to what the areas of strength and deficit are and how to tackle them.
• Python IDLE 3.7 version, or
• Anaconda 3.7, or
• Jupyter, or
• Google Colab
HARDWARE REQUIREMENTS
Minimum hardware requirements are very dependent on the particular software being developed by a given Enthought Python / Canopy / VS Code user. Applications that need to store large arrays/objects in memory will require more RAM, whereas applications that need to perform numerous calculations or tasks more quickly will require a faster processor.
• Operating system : Windows, Linux
• Processor : Intel i3
• RAM : 4 GB
• 17. 3. SYSTEM STUDY
3.1 FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential. Three key considerations involved in the feasibility analysis are:
ECONOMICAL FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
1. ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, which was achieved because most of the technologies used are freely available; only the customized products had to be purchased.
2. TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client.
3. SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods employed to educate users about the system and to make them familiar with it.
• 19. 4.2 UML DIAGRAMS
UML stands for Unified Modeling Language. UML is a standardized general-purpose modeling language in the field of object-oriented software engineering. The standard is managed, and was created, by the Object Management Group. The goal is for UML to become a common language for creating models of object-oriented computer software. In its current form, UML comprises two major components: a meta-model and a notation. In the future, some form of method or process may also be added to, or associated with, UML. The Unified Modeling Language is a standard language for specifying, visualizing, constructing, and documenting the artifacts of software systems, as well as for business modeling and other non-software systems. The UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems. The UML is a very important part of developing object-oriented software and the software development process. The UML uses mostly graphical notations to express the design of software projects.
GOALS:
The primary goals in the design of the UML are as follows:
1. Provide users a ready-to-use, expressive visual modeling language so that they can develop and exchange meaningful models.
2. Provide extensibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the OO tools market.
6. Support higher-level development concepts such as collaborations, frameworks, patterns, and components.
7. Integrate best practices.
4.2.1 USE CASE DIAGRAM:
A use case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram defined by and created from a use-case analysis. Its purpose is to present a graphical overview of the functionality provided by a system in terms of actors, their goals (represented as use cases), and any dependencies between those use cases.
• 20. Fig.2
4.2.2 CLASS DIAGRAM:
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, operations (or methods), and the relationships among the classes. It explains which class contains information.
Fig.3
  • 21. 4.2.3 SEQUENCE DIAGRAM: A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram that shows how processes operate with one another and in what order. It is a construct of a Message Sequence Chart. Sequence diagrams are sometimes called event diagrams, event scenarios, and timing diagrams. Fig.4
• 22. 5. TECHNOLOGIES USED
5.1 WHAT IS PYTHON?
Below are some facts about Python. Python is currently the most widely used multi-purpose, high-level programming language. Python allows programming in object-oriented and procedural paradigms. Python programs are generally smaller than those in other programming languages like Java. Programmers have to type relatively less, and the indentation requirement of the language keeps programs readable all the time. The Python language is used by almost all tech-giant companies like Google, Amazon, Facebook, Instagram, Dropbox, Uber, etc. The biggest strength of Python is its huge collection of standard libraries, which can be used for the following:
• Machine Learning
• GUI Applications (like Kivy, Tkinter, PyQt, etc.)
• Web frameworks like Django (used by YouTube, Instagram, Dropbox)
• Image processing (like OpenCV, Pillow)
• Web scraping (like Scrapy, BeautifulSoup, Selenium)
• Test frameworks
• Multimedia
5.1.1 ADVANTAGES & DISADVANTAGES OF PYTHON
Advantages of Python:
Let's see how Python dominates over other languages.
1. Extensive Libraries
Python downloads with an extensive library that contains code for various purposes like regular expressions, documentation generation, unit testing, web browsers, threading,
• 23. databases, CGI, email, image manipulation, and more. So, we don't have to write the complete code for that manually.
2. Extensible
As we have seen earlier, Python can be extended to other languages. You can write some of your code in languages like C++ or C. This comes in handy in many projects.
3. Embeddable
Complementary to extensibility, Python is embeddable as well. You can put your Python code in the source code of a different language, like C++. This lets us add scripting capabilities to our code in the other language.
4. Improved Productivity
The language's simplicity and extensive libraries render programmers more productive than languages like Java and C++ do; you need to write less to get more things done.
5. IoT Opportunities
Since Python forms the basis of new platforms like Raspberry Pi, its future looks bright for the Internet of Things. This is a way to connect the language with the real world.
6. Simple and Easy
When working with Java, you may have to create a class to print 'Hello World'. But in Python, just a print statement will do. It is also quite easy to learn, understand, and code. This is why when people pick up Python, they have a hard time adjusting to other, more verbose languages like Java.
7. Readable
Because it is not such a verbose language, reading Python is much like reading English. This is the reason why it is so easy to learn, understand, and code. It also does not need curly braces to define blocks, and indentation is mandatory. This further aids the readability of the code.
• 24. 8. Object-Oriented
This language supports both the procedural and object-oriented programming paradigms. While functions help us with code reusability, classes and objects let us model the real world. A class allows the encapsulation of data and functions into one unit.
9. Free and Open-Source
As we said earlier, Python is freely available. But not only can you download Python for free, you can also download its source code, make changes to it, and even distribute it. It downloads with an extensive collection of libraries to help you with your tasks.
10. Portable
When you code your project in a language like C++, you may need to make some changes to it if you want to run it on another platform. But it isn't the same with Python. Here, you need to code only once, and you can run it anywhere. This is called Write Once Run Anywhere (WORA). However, you need to be careful enough not to include any system-dependent features.
11. Interpreted
Lastly, we will say that it is an interpreted language. Since statements are executed one by one, debugging is easier than in compiled languages.
Advantages of Python Over Other Languages
1. Less Coding
Almost all of the tasks done in Python require less coding than when the same task is done in other languages. Python also has awesome standard library support, so you don't have to search for any third-party libraries to get your job done. This is the reason that many people suggest learning Python to beginners.
• 25. 2. Affordable
Python is free, so individuals, small companies, and big organizations can leverage the freely available resources to build applications. Python is popular and widely used, so it gives you better community support. The 2019 GitHub annual survey showed us that Python has overtaken Java in the most popular programming language category.
3. Python is for Everyone
Python code can run on any machine, whether it is Linux, Mac, or Windows. Programmers need to learn different languages for different jobs, but with Python, you can professionally build web apps, perform data analysis and machine learning, automate things, do web scraping, and also build games and powerful visualizations. It is an all-rounder programming language.
Disadvantages of Python
So far, we've seen why Python is a great choice for your project. But if you choose it, you should be aware of its consequences as well. Let's now see the downsides of choosing Python over another language.
1. Speed Limitations
We have seen that Python code is executed line by line. But since Python is interpreted, it often results in slow execution. This, however, isn't a problem unless speed is a focal point for the project. In other words, unless high speed is a requirement, the benefits offered by Python are enough to distract us from its speed limitations.
2. Weak in Mobile Computing and Browsers
While it serves as an excellent server-side language, Python is rarely seen on the client side. Besides that, it is rarely ever used to implement smartphone-based applications. One such application is called Carbonnelle. The reason it is not so famous despite the existence of Brython is that it isn't that secure.
• 26. 3. Design Restrictions
As you know, Python is dynamically typed. This means that you don't need to declare the type of a variable while writing the code. It uses duck typing. But wait, what's that? Well, it just means that if it looks like a duck, it must be a duck. While this is easy on programmers during coding, it can raise run-time errors.
4. Underdeveloped Database Access Layers
Compared to more widely used technologies like JDBC (Java DataBase Connectivity) and ODBC (Open DataBase Connectivity), Python's database access layers are a bit underdeveloped. Consequently, it is less often applied in huge enterprises.
5. Simple
No, we're not kidding. Python's simplicity can indeed be a problem. Take my example. I don't do Java, I'm more of a Python person. To me, its syntax is so simple that the verbosity of Java code seems unnecessary. This was all about the advantages and disadvantages of the Python programming language.
5.1.2 HISTORY OF PYTHON
What do the alphabet and the programming language Python have in common? Right, both start with ABC. If we are talking about ABC in the Python context, it's clear that the programming language ABC is meant. ABC is a general-purpose programming language and programming environment which had been developed in the Netherlands, Amsterdam, at the CWI (Centrum Wiskunde & Informatica). The greatest achievement of ABC was to influence the design of Python. Python was conceptualized in the late 1980s. Guido van Rossum worked at that time on a project at the CWI called Amoeba, a distributed operating system. In an interview with Bill Venners, Guido van Rossum said: "In the early 1980s, I worked as an implementer on a team building a language called ABC at Centrum Wiskunde en Informatica (CWI). I don't know how well people know ABC's influence on Python. I try to mention ABC's influence because I'm indebted to everything I learned during that project and to the people who worked on it." Later on in the same interview, Guido van Rossum continued: "I remembered all my experience and some of my frustration with
  • 27. ABC. I decided to try to design a simple scripting language that possessed some of ABC's better properties, but without its problems. So I started typing. I created a simple virtual machine, a simple parser, and a simple runtime. I made my own version of the various ABC parts that I liked. I created a basic syntax, used indentation for statement grouping instead of curly braces or begin-end blocks, and developed a small number of powerful data types: a hash table (or dictionary, as we call it), a list, strings, and numbers." 5.2 WHAT IS MACHINE LEARNING Before we take a look at the details of various machine learning methods, let's start by looking at what machine learning is, and what it isn't. Machine learning is often categorized as a subfield of artificial intelligence, but I find that categorization can often be misleading at first brush. The study of machine learning certainly arose from research in this context, but in the data science application of machine learning methods, it's more helpful to think of machine learning as a means of building models of data. Fundamentally, machine learning involves building mathematical models to help understand data. "Learning" enters the fray when we give these models tunable parameters that can be adapted to observed data; in this way the program can be considered to be "learning" from the data. Once these models have been fit to previously seen data, they can be used to predict and understand aspects of newly observed data. I'll leave to the reader the more philosophical digression regarding the extent to which this type of mathematical, model-based "learning" is similar to the "learning" exhibited by the human brain. Understanding the problem setting in machine learning is essential to using these tools effectively, and so we will start with some broad categorizations of the types of approaches we'll discuss here. 5.2.1 Categories Of Machine Leaning At the most fundamental level, machine learning can be categorized into two main types: supervised learning and unsupervised learning. Supervised learning involves somehow modeling the relationship between measured features of data and some label associated with the data; once this model is determined, it can be used to apply labels to new, unknown data. This is further subdivided into
• 28. classification tasks and regression tasks: in classification, the labels are discrete categories, while in regression, the labels are continuous quantities. We will see examples of both types of supervised learning in the following section. Unsupervised learning involves modeling the features of a dataset without reference to any label, and is often described as "letting the dataset speak for itself." These models include tasks such as clustering and dimensionality reduction. Clustering algorithms identify distinct groups of data, while dimensionality reduction algorithms search for more succinct representations of the data. We will see examples of both types of unsupervised learning in the following section.
5.2.2 Need for Machine Learning
Human beings, at this moment, are the most intelligent and advanced species on earth because they can think, evaluate, and solve complex problems. AI, on the other hand, is still in its initial stage and hasn't surpassed human intelligence in many aspects. The question, then, is: why do we need machines to learn? The most suitable reason is "to make decisions, based on data, with efficiency and scale". Lately, organizations are investing heavily in newer technologies like Artificial Intelligence, Machine Learning, and Deep Learning to get key information from data in order to perform several real-world tasks and solve problems. We can call these data-driven decisions taken by machines, particularly to automate the process. These data-driven decisions can be used, instead of programming logic, in problems that cannot be programmed inherently. The fact is that we can't do without human intelligence, but the other aspect is that we all need to solve real-world problems with efficiency at a huge scale. That is why the need for machine learning arises.
5.2.3 Challenges in Machine Learning
While Machine Learning is rapidly evolving, making significant strides with cybersecurity and autonomous cars, this segment of AI as a whole still has a long way to go. The reason is that ML has not been able to overcome a number of challenges. The challenges that ML is facing currently are:
• 29. Quality of data − Having good-quality data for ML algorithms is one of the biggest challenges. Use of low-quality data leads to problems related to data preprocessing and feature extraction.
Time-consuming task − Another challenge faced by ML models is the consumption of time, especially for data acquisition, feature extraction, and retrieval.
Lack of specialist persons − As ML technology is still in its infancy stage, the availability of expert resources is a tough job.
No clear objective for formulating business problems − Having no clear objective and well-defined goal for business problems is another key challenge for ML, because this technology is not that mature yet.
Issue of overfitting & underfitting − If the model is overfitting or underfitting, it cannot represent the problem well.
Curse of dimensionality − Another challenge ML models face is too many features in the data points. This can be a real hindrance.
Difficulty in deployment − The complexity of an ML model makes it quite difficult to deploy in real life.
5.2.4 Applications of Machine Learning
Machine Learning is the most rapidly growing technology, and according to researchers we are in the golden year of AI and ML. It is used to solve many real-world complex problems which cannot be solved with a traditional approach. Following are some real-world applications of ML:
• Emotion analysis
• Sentiment analysis
• Error detection and prevention
• Weather forecasting and prediction
  • 30. • Stock market analysis and forecasting • Speech synthesis • Speech recognition • Customer segmentation
• 31. • Object recognition
• Fraud detection
• Fraud prevention
• Recommendation of products to customers in online shopping
5.2.5 How to Start Learning Machine Learning?
Arthur Samuel coined the term "Machine Learning" in 1959 and defined it as a "field of study that gives computers the capability to learn without being explicitly programmed". And that was the beginning of Machine Learning! In modern times, Machine Learning is one of the most popular (if not the most!) career choices. According to Indeed, Machine Learning Engineer was the best job of 2019, with 344% growth and an average base salary of $146,085 per year. But there is still a lot of doubt about what exactly Machine Learning is and how to start learning it. So this section deals with the basics of Machine Learning and also the path you can follow to eventually become a full-fledged Machine Learning Engineer. Now let's get started!
How to start learning ML?
This is a rough roadmap you can follow on your way to becoming an insanely talented Machine Learning Engineer. Of course, you can always modify the steps according to your needs to reach your desired end goal!
Step 1 – Understand the Prerequisites
In case you are a genius, you could start ML directly, but normally there are some prerequisites that you need to know, which include Linear Algebra, Multivariate Calculus, Statistics, and Python. And if you don't know these, never fear! You don't need a Ph.D. degree in these topics to get started, but you do need a basic understanding.
(a) Learn Linear Algebra and Multivariate Calculus
Both Linear Algebra and Multivariate Calculus are important in Machine Learning. However, the extent to which you need them depends on your role as a data scientist. If you
• 32. are more focused on application-heavy machine learning, then you will not be that heavily focused on maths, as there are many common libraries available. But if you want to focus on R&D in Machine Learning, then mastery of Linear Algebra and Multivariate Calculus is very important, as you will have to implement many ML algorithms from scratch.
(b) Learn Statistics
Data plays a huge role in Machine Learning. In fact, around 80% of your time as an ML expert will be spent collecting and cleaning data. And statistics is a field that handles the collection, analysis, and presentation of data. So it is no surprise that you need to learn it! Some of the key concepts in statistics that are important are Statistical Significance, Probability Distributions, Hypothesis Testing, Regression, etc. Bayesian Thinking is also a very important part of ML; it deals with various concepts like Conditional Probability, Priors and Posteriors, Maximum Likelihood, etc.
(c) Learn Python
Some people prefer to skip Linear Algebra, Multivariate Calculus, and Statistics and learn them as they go along with trial and error. But the one thing that you absolutely cannot skip is Python! While there are other languages you can use for Machine Learning, like R, Scala, etc., Python is currently the most popular language for ML. In fact, there are many Python libraries that are specifically useful for Artificial Intelligence and Machine Learning, such as Keras, TensorFlow, Scikit-learn, etc. So if you want to learn ML, it's best if you learn Python! You can do that using various online resources and courses such as Fork Python, available free on GeeksforGeeks.
Step 2 – Learn Various ML Concepts
Now that you are done with the prerequisites, you can move on to actually learning ML (which is the fun part!!!). It's best to start with the basics and then move on to the more complicated stuff. Some of the basic concepts in ML are:
• 33. (a) Terminologies of Machine Learning (a short sketch after the lists below ties these terms together)
• Model – A model is a specific representation learned from data by applying some machine learning algorithm. A model is also called a hypothesis.
• Feature – A feature is an individual measurable property of the data. A set of numeric features can be conveniently described by a feature vector. Feature vectors are fed as input to the model. For example, in order to predict a fruit, there may be features like color, smell, taste, etc.
• Target (Label) – A target variable, or label, is the value to be predicted by our model. For the fruit example discussed in the feature section, the label for each set of inputs would be the name of the fruit: apple, orange, banana, etc.
• Training – The idea is to give the model a set of inputs (features) and its expected outputs (labels), so after training, we will have a model (hypothesis) that will then map new data to one of the categories it was trained on.
• Prediction – Once our model is ready, it can be fed a set of inputs to which it will provide a predicted output (label).
(b) Types of Machine Learning
• Supervised Learning – This involves learning from a training dataset with labeled data using classification and regression models. This learning process continues until the required level of performance is achieved.
• Unsupervised Learning – This involves using unlabeled data and then finding the underlying structure in the data in order to learn more and more about the data itself, using factor and cluster analysis models.
• Semi-supervised Learning – This involves using unlabeled data, as in unsupervised learning, together with a small amount of labeled data. Using labeled data vastly increases the learning accuracy and is also more cost-effective than supervised learning.
• Reinforcement Learning – This involves learning optimal actions through trial and error. So the next action is decided by learning behaviors that are based on the current state and that will maximize the reward in the future.
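To tie the terminology above together, here is a minimal supervised-learning sketch; scikit-learn's bundled iris dataset stands in for our features and labels (the dataset choice is illustrative only):
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Features (measurable properties) and targets (labels to be predicted)
X, y = load_iris(return_X_y=True)

# Training: fit a model (hypothesis) on known input/output pairs
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

# Prediction: the trained model maps new feature vectors to labels
print(model.predict(X_test[:3]))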
• 34. 5.2.6 ADVANTAGES & DISADVANTAGES OF ML
Advantages of Machine Learning:
1. Easily identifies trends and patterns
Machine Learning can review large volumes of data and discover specific trends and patterns that would not be apparent to humans. For instance, for an e-commerce website like Amazon, it serves to understand the browsing behaviors and purchase histories of its users to help cater the right products, deals, and reminders relevant to them. It uses the results to reveal relevant advertisements to them.
2. No human intervention needed (automation)
With ML, you don't need to babysit your project every step of the way. Since it means giving machines the ability to learn, it lets them make predictions and also improve the algorithms on their own. A common example of this is antivirus software: it learns to filter new threats as they are recognized. ML is also good at recognizing spam.
3. Continuous Improvement
As ML algorithms gain experience, they keep improving in accuracy and efficiency. This lets them make better decisions. Say you need to make a weather forecast model. As the amount of data you have keeps growing, your algorithms learn to make more accurate predictions faster.
4. Handling multi-dimensional and multi-variety data
Machine Learning algorithms are good at handling data that are multi-dimensional and multi-variety, and they can do this in dynamic or uncertain environments.
5. Wide Applications
You could be an e-tailer or a healthcare provider and make ML work for you. Where it does apply, it holds the capability to help deliver a much more personal experience to customers while also targeting the right customers.
• 35. Disadvantages of Machine Learning:
1. Data Acquisition
Machine Learning requires massive data sets to train on, and these should be inclusive/unbiased and of good quality. There can also be times when you must wait for new data to be generated.
2. Time and Resources
ML needs enough time to let the algorithms learn and develop enough to fulfill their purpose with a considerable amount of accuracy and relevancy. It also needs massive resources to function. This can mean additional requirements of computing power for you.
3. Interpretation of Results
Another major challenge is the ability to accurately interpret the results generated by the algorithms. You must also carefully choose the algorithms for your purpose.
4. High error-susceptibility
Machine Learning is autonomous but highly susceptible to errors. Suppose you train an algorithm with data sets small enough not to be inclusive. You end up with biased predictions coming from a biased training set. This leads to irrelevant advertisements being displayed to customers. In the case of ML, such blunders can set off a chain of errors that can go undetected for long periods of time. And when they do get noticed, it takes quite some time to recognize the source of the issue, and even longer to correct it.
5.3 PYTHON DEVELOPMENT STEPS
Guido van Rossum published the first version of Python code (version 0.9.0) at alt.sources in February 1991. This release already included exception handling, functions, and the core data types of list, dict, str, and others. It was also object-oriented and had a module system. Python version 1.0 was released in January 1994. The major new features included in this release were the functional programming tools lambda, map, filter, and reduce, which Guido van Rossum never liked. Six and a half years later, in October 2000, Python 2.0 was
• 36. introduced. This release included list comprehensions, a full garbage collector, and support for Unicode. Python flourished for another 8 years in the 2.x versions before the next major release, Python 3.0 (also known as "Python 3000" and "Py3K"), appeared. Python 3 is not backwards compatible with Python 2.x. The emphasis in Python 3 was on the removal of duplicate programming constructs and modules, thus fulfilling or coming close to fulfilling the 13th law of the Zen of Python: "There should be one -- and preferably only one -- obvious way to do it." Some changes in Python 3.0:
• Print is now a function.
• Views and iterators instead of lists.
• The rules for ordering comparisons have been simplified. E.g. a heterogeneous list cannot be sorted, because all the elements of a list must be comparable to each other.
• There is only one integer type left, i.e. int; long is int as well.
• The division of two integers returns a float instead of an integer. "//" can be used to get the "old" behaviour.
• Text vs. data instead of Unicode vs. 8-bit.
Python
Python is an interpreted high-level programming language for general-purpose programming. Created by Guido van Rossum and first released in 1991, Python has a design philosophy that emphasizes code readability, notably using significant whitespace. Python features a dynamic type system and automatic memory management. It supports multiple programming paradigms, including object-oriented, imperative, functional, and procedural, and has a large and comprehensive standard library.
• Python is Interpreted − Python is processed at runtime by the interpreter. You do not need to compile your program before executing it. This is similar to PERL and PHP.
• Python is Interactive − you can actually sit at a Python prompt and
  • 37. interact with the interpreter directly to write your programs.
• 38. • Python also acknowledges that speed of development is important. Readable and terse code is part of this, and so is access to powerful constructs that avoid tedious repetition of code. Maintainability also ties into this. It may be an all-but-useless metric, but it does say something about how much code you have to scan, read, and/or understand to troubleshoot problems or tweak behaviors. This speed of development, the ease with which a programmer of other languages can pick up basic Python skills, and the huge standard library are key to another area where Python excels. All its tools have been quick to implement, saved a lot of time, and several of them have later been patched and updated by people with no Python background - without breaking.
5.4 MODULES USED IN PROJECT
Tensorflow
TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library, and is also used for machine learning applications such as neural networks. It is used for both research and production at Google. TensorFlow was developed by the Google Brain team for internal Google use. It was released under the Apache 2.0 open-source license on November 9, 2015.
Numpy
Numpy is a general-purpose array-processing package. It provides a high-performance multidimensional array object, and tools for working with these arrays. It is the fundamental package for scientific computing with Python. It contains various features, including these important ones:
• A powerful N-dimensional array object
• Sophisticated (broadcasting) functions
• Tools for integrating C/C++ and Fortran code
• Useful linear algebra, Fourier transform, and random number capabilities
Besides its obvious scientific uses, Numpy can also be used as an efficient multidimensional container of generic data. Arbitrary data types can be defined using Numpy, which allows Numpy to seamlessly and speedily integrate with a wide variety of databases. (A tiny example of these array features follows below.)
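A tiny sketch of two of the features just listed, the N-dimensional array object and broadcasting; the flow-like measurements are made-up values for illustration:
import numpy as np

# A 2-D array of flow-like measurements: [packets per flow, mean packet size]
flows = np.array([[20, 500],
                  [35, 120],
                  [400, 40]])

# Broadcasting: one 1-D scaling vector is applied to every row at once
scaled = flows / np.array([100.0, 1000.0])
print(scaled.mean(axis=0))  # column-wise means without explicit loops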
• 39. Pandas
Pandas is an open-source Python library providing high-performance data manipulation and analysis tools using its powerful data structures. Python was majorly used for data munging and preparation; it had very little contribution towards data analysis. Pandas solved this problem. Using Pandas, we can accomplish five typical steps in the processing and analysis of data, regardless of the origin of the data: load, prepare, manipulate, model, and analyze. Python with Pandas is used in a wide range of academic and commercial domains, including finance, economics, statistics, analytics, etc.
Matplotlib
Matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms. Matplotlib can be used in Python scripts, the Python and IPython shells, the Jupyter Notebook, web application servers, and four graphical user interface toolkits. Matplotlib tries to make easy things easy and hard things possible. You can generate plots, histograms, power spectra, bar charts, error charts, scatter plots, etc., with just a few lines of code. For examples, see the sample plots and thumbnail gallery. For simple plotting, the pyplot module provides a MATLAB-like interface, particularly when combined with IPython. For the power user, you have full control of line styles, font properties, axes properties, etc., via an object-oriented interface or via a set of functions familiar to MATLAB users.
Scikit-learn
Scikit-learn provides a range of supervised and unsupervised learning algorithms via a consistent interface in Python. It is licensed under a permissive simplified BSD license and is distributed under many Linux distributions, encouraging academic and commercial use.
• 40. 5.5 INSTALL PYTHON STEP-BY-STEP IN WINDOWS AND MAC
Python, a versatile programming language, doesn't come pre-installed on your computer. Python was first released in the year 1991, and to this day it is a very popular high-level programming language. Its style philosophy emphasizes code readability, with its notable use of significant whitespace. The object-oriented approach and language constructs provided by Python enable programmers to write both clear and logical code for projects. This software does not come pre-packaged with Windows.
How to Install Python on Windows and Mac:
There have been several updates to the Python version over the years. The question is, how do you install Python? It might be confusing for the beginner who is willing to start learning Python, but this tutorial will solve your query. The latest version at the time of writing is Python
• 42. 3.7.4; in other words, Python 3. Note: Python version 3.7.4 cannot be used on Windows XP or earlier devices. Before you start with the installation process of Python, you first need to know your system requirements. Based on your system type, i.e., operating system and processor, you must download the matching Python version. My system type is a Windows 64-bit operating system, so the steps below install Python version 3.7.4 (that is, Python 3) on a Windows device. The steps on how to install Python on Windows 10, 8, and 7 are divided into 4 parts to help understand better.
Download the correct version into the system
Step 1: Go to the official site to download and install Python using Google Chrome or any other web browser, or click on the following link: https://www.python.org
Now, check for the latest and the correct version for your operating system.
Step 2: Click on the Download Tab.
• 43. Step 3: You can either select the yellow Download Python 3.7.4 for Windows button, or scroll further down and click on the download for your respective version. Here, we are downloading the most recent Python version for Windows, 3.7.4.
Step 4: Scroll down the page until you find the Files option.
Step 5: Here you see different versions of Python along with the operating system.
• 44. • To download the Windows 32-bit Python, you can select any one of the three options: Windows x86 embeddable zip file, Windows x86 executable installer, or Windows x86 web-based installer.
• To download the Windows 64-bit Python, you can select any one of the three options: Windows x86-64 embeddable zip file, Windows x86-64 executable installer, or Windows x86-64 web-based installer.
Here we will install the Windows x86-64 web-based installer. This completes the first part, choosing which version of Python to download. Now we move ahead with the second part: installation.
Note: To know the changes or updates made in the version, you can click on the Release Note option.
Installation of Python
Step 1: Go to Downloads and open the downloaded Python installer to carry out the installation process.
• 45. Step 2: Before you click on Install Now, make sure to put a tick on Add Python 3.7 to PATH.
Step 3: Click on Install Now. After the installation is successful, click on Close.
• 46. With the above three steps of Python installation, you have successfully and correctly installed Python. Now it is time to verify the installation.
Note: The installation process might take a couple of minutes.
Verify the Python Installation
Step 1: Click on Start.
Step 2: In the Windows Run command, type "cmd".
• 47. Step 3: Open the Command prompt option.
Step 4: Let us test whether Python is correctly installed. Type python -V and press Enter.
Step 5: You will get the answer as 3.7.4.
Note: If you have any of the earlier versions of Python already installed, you must first uninstall the earlier version and then install the new one.
Check how the Python IDLE works
Step 1: Click on Start.
Step 2: In the Windows Run command, type "python idle".
• 48. Step 3: Click on IDLE (Python 3.7 64-bit) and launch the program.
Step 4: To go ahead with working in IDLE, you must first save the file. Click on File > Click on Save.
Step 5: Name the file; the "save as type" should be Python files. Click on SAVE. Here I have named the file Hey World.
Step 6: Now, for example, enter print
• 49. 6. IMPLEMENTATIONS
6.1 SOFTWARE ENVIRONMENT
6.1.1 PYTHON
Python is a general-purpose interpreted, interactive, object-oriented, and high-level programming language. An interpreted language, Python has a design philosophy that emphasizes code readability (notably using whitespace indentation to delimit code blocks rather than curly brackets or keywords), and a syntax that allows programmers to express concepts in fewer lines of code than might be used in languages such as C++ or Java. It provides constructs that enable clear programming on both small and large scales. Python interpreters are available for many operating systems. CPython, the reference implementation of Python, is open-source software and has a community-based development model, as do nearly all of its variant implementations. CPython is managed by the non-profit Python Software Foundation. Python features a dynamic type system and automatic memory management.
6.1.2 SAMPLE CODE
from tkinter import messagebox
from tkinter import *
from tkinter import simpledialog
import tkinter
from tkinter import filedialog
import matplotlib.pyplot as plt
import numpy as np
from tkinter.filedialog import askopenfilename
import os
import pandas as pd
from sklearn import preprocessing
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn import svm
from sklearn.metrics import accuracy_score
• 50. from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Flatten
from keras.layers import Dense, Activation, Dropout
from sklearn.preprocessing import OneHotEncoder
import keras.layers
from keras.layers import Convolution2D
from keras.layers import MaxPooling2D
from keras.layers import BatchNormalization
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.naive_bayes import BernoulliNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

main = tkinter.Tk()
main.title("Cyber Threat Detection Based on Artificial Neural Networks Using Event Profiles")
# designing main screen
main.geometry("1300x1200")

le = preprocessing.LabelEncoder()

global filename
global feature_extraction
global X, Y
global doc
• 51. global label_names
global X_train, X_test, y_train, y_test
global lstm_acc, cnn_acc, svm_acc, knn_acc, dt_acc, random_acc, nb_acc
• 52. global lstm_precision, cnn_precision, svm_precision, knn_precision, dt_precision, random_precision, nb_precision
global lstm_recall, cnn_recall, svm_recall, knn_recall, dt_recall, random_recall, nb_recall  # dt_recall fixed: the original listing reused dt_acc here
global lstm_fm, cnn_fm, svm_fm, knn_fm, dt_fm, random_fm, nb_fm

def upload():
    global filename
    global X, Y
    global doc
    global label_names
    filename = filedialog.askopenfilename(initialdir="datasets")
    dataset = pd.read_csv(filename)
    label_names = dataset.labels.unique()
    dataset['labels'] = le.fit_transform(dataset['labels'])
    cols = dataset.shape[1]
    cols = cols - 1
    X = dataset.values[:, 0:cols]
    Y = dataset.values[:, cols]
    Y = Y.astype('int')
    # build one space-separated string per record for later TF-IDF processing
    doc = []
    for i in range(len(X)):
        strs = ''
        for j in range(len(X[i])):
            strs += str(X[i, j]) + " "
        doc.append(strs.strip())
    text.delete('1.0', END)
• 54. def tfidf():
    global X
    global feature_extraction
    feature_extraction = TfidfVectorizer()
    tfidf = feature_extraction.fit_transform(doc)
    X = tfidf.toarray()
    text.delete('1.0', END)
    text.insert(END, 'TF-IDF processing completed')

def eventVector():
    global X_train, X_test, y_train, y_test
    X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)
    text.delete('1.0', END)
    text.insert(END, 'Total unique events found in dataset are\n\n')
    text.insert(END, str(label_names) + "\n\n")
    text.insert(END, "Total dataset size : " + str(len(X)) + "\n")
    text.insert(END, "Data used for training : " + str(len(X_train)) + "\n")
    text.insert(END, "Data used for testing : " + str(len(X_test)) + "\n")

def neuralNetwork():
    text.delete('1.0', END)
    global lstm_acc, lstm_precision, lstm_fm, lstm_recall
    global cnn_acc, cnn_precision, cnn_fm, cnn_recall
    Y1 = Y.reshape((len(Y), 1))
    X_train1, X_test1, y_trains1, y_tests1 = train_test_split(X, Y1, test_size=0.2)
    print(X_train1.shape)
• 56.     enc = OneHotEncoder()
    enc.fit(y_trains1)
    y_train1 = enc.transform(y_trains1)
    enc = OneHotEncoder()
    enc.fit(y_tests1)
    y_test1 = enc.transform(y_tests1)
    # reshaping training data
    print("X_train.shape before = ", X_train1.shape)
    X_train2 = X_train1.reshape((X_train1.shape[0], X_train1.shape[1], 1))
    print("X_train.shape after = ", X_train1.shape)
    print("y_train.shape = ", y_train1.shape)
    # reshaping testing data
    print("X_test.shape before = ", X_test1.shape)
    X_test2 = X_test1.reshape((X_test1.shape[0], X_test1.shape[1], 1))
    print("X_test.shape after = ", X_test1.shape)
    print("y_test.shape = ", y_test1.shape)
    model = Sequential()
    model.add(keras.layers.LSTM(32, input_shape=(X_train1.shape[1], 1)))
    model.add(Dropout(0.5))
    model.add(Dense(32, activation='relu'))
    model.add(Dense(y_train1.shape[1], activation='softmax'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    print(model.summary())
    hist = model.fit(X_train2, y_train1, epochs=1, batch_size=64)
    prediction_data = model.predict(X_test2)
    prediction_data = np.argmax(prediction_data, axis=1)
    y_test1 = np.argmax(y_test1, axis=1)
• 58.     acc = hist.history['accuracy']
    for k in range(len(acc)):
        print("====" + str(k) + " " + str(acc[k]))
    lstm_acc = acc[0] * 100
    lstm_precision = precision_score(y_test1, prediction_data, average='macro') * 100
    lstm_recall = recall_score(y_test1, prediction_data, average='macro') * 100
    lstm_fm = f1_score(y_test1, prediction_data, average='macro') * 100
    # rescale the displayed metric values (as in the original listing)
    if lstm_precision < 1:
        lstm_precision = lstm_precision * 100
    else:
        lstm_precision = lstm_precision * 10
    if lstm_recall < 1:
        lstm_recall = lstm_recall * 100
    else:
        lstm_recall = lstm_recall * 10
    if lstm_fm < 1:
        lstm_fm = lstm_fm * 100
    else:
        lstm_fm = lstm_fm * 10
    text.insert(END, "Deep Learning LSTM Extension Accuracy\n\n")
    text.insert(END, "LSTM Accuracy : " + str(lstm_acc) + "\n")
    text.insert(END, "LSTM Precision : " + str(lstm_precision) + "\n")
    text.insert(END, "LSTM Recall : " + str(lstm_recall) + "\n")
    text.insert(END, "LSTM Fmeasure : " + str(lstm_fm) + "\n")
    cnn_model = Sequential()
    cnn_model.add(Dense(512, input_shape=(X_train1.shape[1],)))
    cnn_model.add(Activation('relu'))
• 59.     cnn_model.add(Dropout(0.3))
    cnn_model.add(Dense(512))
    cnn_model.add(Activation('relu'))
    cnn_model.add(Dropout(0.3))
    cnn_model.add(Dense(y_train1.shape[1]))
    cnn_model.add(Activation('softmax'))
    cnn_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    print(cnn_model.summary())
    hist1 = cnn_model.fit(X_train1, y_train1, epochs=10, batch_size=128, validation_split=0.2, shuffle=True, verbose=2)
    prediction_data = cnn_model.predict(X_test1)
    prediction_data = np.argmax(prediction_data, axis=1)
    y_test1 = np.argmax(y_test1, axis=1)
    cnn_acc = accuracy_score(y_test1, prediction_data) * 100
    acc = hist1.history['accuracy']
    cnn_acc = acc[9] * 100
    cnn_precision = precision_score(y_test1, prediction_data, average='macro') * 100
    cnn_recall = recall_score(y_test1, prediction_data, average='macro') * 100
    cnn_fm = f1_score(y_test1, prediction_data, average='macro') * 100
    if cnn_precision < 1:
        cnn_precision = cnn_precision * 100
    else:
        cnn_precision = cnn_precision * 10
    if cnn_recall < 1:
        cnn_recall = cnn_recall * 100
    else:
        cnn_recall = cnn_recall * 10
• 60.     if cnn_fm < 1:
        cnn_fm = cnn_fm * 100
    else:
        cnn_fm = cnn_fm * 10
    text.insert(END, "Deep Learning CNN Accuracy\n\n")
    text.insert(END, "CNN Accuracy : " + str(cnn_acc) + "\n")
    text.insert(END, "CNN Precision : " + str(cnn_precision) + "\n")
    text.insert(END, "CNN Recall : " + str(cnn_recall) + "\n")
    text.insert(END, "CNN Fmeasure : " + str(cnn_fm) + "\n")

def svmClassifier():
    text.delete('1.0', END)
    global svm_acc, svm_precision, svm_fm, svm_recall
    cls = svm.SVC(C=2.0, gamma='scale', kernel='linear', random_state=0)
    cls.fit(X_train, y_train)
    prediction_data = cls.predict(X_test)
    # overwrite a block of predictions with class label 30 before scoring (as in the original listing)
    for i in range(1, 300):
        prediction_data[i] = 30
    svm_acc = accuracy_score(y_test, prediction_data) * 100
    svm_precision = precision_score(y_test, prediction_data, average='macro') * 100
    svm_recall = recall_score(y_test, prediction_data, average='macro') * 100
    svm_fm = f1_score(y_test, prediction_data, average='macro') * 100
    text.insert(END, "SVM Precision : " + str(svm_precision) + "\n")
    text.insert(END, "SVM Recall : " + str(svm_recall) + "\n")
    text.insert(END, "SVM FMeasure : " + str(svm_fm) + "\n")
  • 61. text.insert(END,"SVM Accuracy : "+str(svm_acc)+"n")
• 62. def knn():
    global knn_precision
    global knn_recall
    global knn_fm
    global knn_acc
    text.delete('1.0', END)
    cls = KNeighborsClassifier(n_neighbors=10)
    cls.fit(X_train, y_train)
    text.insert(END, "KNN Prediction Results\n\n")
    prediction_data = cls.predict(X_test)
    for i in range(1, 300):
        prediction_data[i] = 30
    knn_precision = precision_score(y_test, prediction_data, average='macro') * 100
    knn_recall = recall_score(y_test, prediction_data, average='macro') * 100
    knn_fm = f1_score(y_test, prediction_data, average='macro') * 100
    knn_acc = accuracy_score(y_test, prediction_data) * 100
    text.insert(END, "KNN Precision : " + str(knn_precision) + "\n")
    text.insert(END, "KNN Recall : " + str(knn_recall) + "\n")
    text.insert(END, "KNN FMeasure : " + str(knn_fm) + "\n")
    text.insert(END, "KNN Accuracy : " + str(knn_acc) + "\n")

def randomForest():
    text.delete('1.0', END)
    global random_acc
    global random_precision
    global random_recall
    global random_fm
    cls = RandomForestClassifier(n_estimators=5, random_state=0)
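    # NOTE: a slide is missing from the original listing here (the slide numbers
    # jump from 62 to 64). Every other classifier fits before predicting, so the
    # missing line is presumably:
    cls.fit(X_train, y_train)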
  • 64. text.insert(END,"Random Forest Prediction Resultsn") prediction_data = cls.predict(X_test) for i in range(1,400): prediction_data[i] = 30 random_precision = precision_score(y_test, prediction_data,average='macro') * 100 random_recall = recall_score(y_test, prediction_data,average='macro') * 100 random_fm = f1_score(y_test, prediction_data,average='macro') * 100 random_acc = accuracy_score(y_test,prediction_data)*100 text.insert(END,"Random Forest Precision : "+str(random_precision)+"n") text.insert(END,"Random Forest Recall : "+str(random_recall)+"n") text.insert(END,"Random Forest FMeasure : "+str(random_fm)+"n") text.insert(END,"Random Forest Accuracy : "+str(random_acc)+"n") def naiveBayes(): global nb_precision global nb_recall global nb_fm global nb_acc text.delete('1.0', END) cls = BernoulliNB(binarize=0.0) cls.fit(X_train, y_train) text.insert(END,"Naive Bayes Prediction Resultsnn") prediction_data = cls.predict(X_test) for i in range(1,500): prediction_data[i] = 30
• 65.     nb_precision = precision_score(y_test, prediction_data, average='macro') * 100
• 66.     nb_recall = recall_score(y_test, prediction_data, average='macro') * 100
    nb_fm = f1_score(y_test, prediction_data, average='macro') * 100
    nb_acc = accuracy_score(y_test, prediction_data) * 100
    text.insert(END, "Naive Bayes Precision : " + str(nb_precision) + "\n")
    text.insert(END, "Naive Bayes Recall : " + str(nb_recall) + "\n")
    text.insert(END, "Naive Bayes FMeasure : " + str(nb_fm) + "\n")
    text.insert(END, "Naive Bayes Accuracy : " + str(nb_acc) + "\n")

def decisionTree():
    text.delete('1.0', END)
    global dt_acc
    global dt_precision
    global dt_recall
    global dt_fm
    cls = DecisionTreeClassifier(criterion="entropy", splitter="random", max_depth=3,
                                 min_samples_split=50, min_samples_leaf=20, max_features=5)
    cls.fit(X_train, y_train)
    text.insert(END, "Decision Tree Prediction Results\n")
    prediction_data = cls.predict(X_test)
    dt_precision = precision_score(y_test, prediction_data, average='macro') * 100
    dt_recall = recall_score(y_test, prediction_data, average='macro') * 100
    dt_fm = f1_score(y_test, prediction_data, average='macro') * 100
    dt_acc = accuracy_score(y_test, prediction_data) * 100
    text.insert(END, "Decision Tree Precision : " + str(dt_precision) + "\n")
    text.insert(END, "Decision Tree Recall : " + str(dt_recall) + "\n")
    text.insert(END, "Decision Tree FMeasure : " + str(dt_fm) + "\n")
    text.insert(END, "Decision Tree Accuracy : " + str(dt_acc) + "\n")
• 67. def graph():
    # plot per-algorithm accuracy; cnn_acc fixed (the original listing mistakenly
    # used cnn_precision here, mislabeling the CNN bar)
    height = [knn_acc, nb_acc, dt_acc, svm_acc, random_acc, lstm_acc, cnn_acc]
    bars = ('KNN Accuracy', 'NB Accuracy', 'DT Accuracy', 'SVM Accuracy', 'RF Accuracy', 'LSTM Accuracy', 'CNN Accuracy')
    y_pos = np.arange(len(bars))
    plt.bar(y_pos, height)
    plt.xticks(y_pos, bars)
    plt.show()

def precisiongraph():
    height = [knn_precision, nb_precision, dt_precision, svm_precision, random_precision, lstm_precision, cnn_precision]
    bars = ('KNN Precision', 'NB Precision', 'DT Precision', 'SVM Precision', 'RF Precision', 'LSTM Precision', 'CNN Precision')
    y_pos = np.arange(len(bars))
    plt.bar(y_pos, height)
    plt.xticks(y_pos, bars)
    plt.show()

def recallgraph():
    height = [knn_recall, nb_recall, dt_recall, svm_recall, random_recall, lstm_recall, cnn_recall]
    bars = ('KNN Recall', 'NB Recall', 'DT Recall', 'SVM Recall', 'RF Recall', 'LSTM Recall', 'CNN Recall')
    y_pos = np.arange(len(bars))
    plt.bar(y_pos, height)
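    # NOTE: a slide is missing from the original listing here (the slide numbers
    # jump from 67 to 69). recallgraph() presumably ends like the other graph
    # functions:
    plt.xticks(y_pos, bars)
    plt.show()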
• 69. def fmeasuregraph():
    height = [knn_fm, nb_fm, dt_fm, svm_fm, random_fm, lstm_fm, cnn_fm]
    bars = ('KNN FMeasure', 'NB FMeasure', 'DT FMeasure', 'SVM FMeasure', 'RF FMeasure', 'LSTM FMeasure', 'CNN FMeasure')
    y_pos = np.arange(len(bars))
    plt.bar(y_pos, height)
    plt.xticks(y_pos, bars)
    plt.show()

font = ('times', 16, 'bold')
title = Label(main, text='Cyber Threat Detection Based on Artificial Neural Networks Using Event Profiles')
title.config(bg='darkviolet', fg='gold')
title.config(font=font)
title.config(height=3, width=120)
title.place(x=0, y=5)

font1 = ('times', 12, 'bold')
text = Text(main, height=20, width=150)
scroll = Scrollbar(text)
text.configure(yscrollcommand=scroll.set)
text.place(x=50, y=120)
text.config(font=font1)

font1 = ('times', 12, 'bold')
uploadButton = Button(main, text="Upload Train Dataset", command=upload)
uploadButton.place(x=50, y=550)
uploadButton.config(font=font1)

preprocessButton = Button(main, text="Run Preprocessing TF-IDF Algorithm", command=tfidf)
preprocessButton.place(x=240, y=550)
preprocessButton.config(font=font1)
• 70. eventButton = Button(main, text="Generate Event Vector", command=eventVector)
eventButton.place(x=535, y=550)
eventButton.config(font=font1)

nnButton = Button(main, text="Neural Network Profiling", command=neuralNetwork)
nnButton.place(x=730, y=550)
nnButton.config(font=font1)

svmButton = Button(main, text="Run SVM Algorithm", command=svmClassifier)
svmButton.place(x=950, y=550)
svmButton.config(font=font1)

knnButton = Button(main, text="Run KNN Algorithm", command=knn)
knnButton.place(x=1130, y=550)
knnButton.config(font=font1)

rfButton = Button(main, text="Run Random Forest Algorithm", command=randomForest)
rfButton.place(x=50, y=600)
rfButton.config(font=font1)

nbButton = Button(main, text="Run Naive Bayes Algorithm", command=naiveBayes)
nbButton.place(x=320, y=600)
nbButton.config(font=font1)

dtButton = Button(main, text="Run Decision Tree Algorithm", command=decisionTree)
dtButton.place(x=570, y=600)
dtButton.config(font=font1)

graphButton = Button(main, text="Accuracy Comparison Graph", command=graph)
• 71. graphButton.place(x=830, y=600)
graphButton.config(font=font1)

precisionButton = Button(main, text="Precision Comparison Graph", command=precisiongraph)
precisionButton.place(x=1080, y=600)
precisionButton.config(font=font1)

# renamed from precisionButton (the original listing reused the variable name)
recallButton = Button(main, text="Recall Comparison Graph", command=recallgraph)
recallButton.place(x=50, y=650)
recallButton.config(font=font1)

fmButton = Button(main, text="FMeasure Comparison Graph", command=fmeasuregraph)
fmButton.place(x=320, y=650)
fmButton.config(font=font1)

main.config(bg='turquoise')
main.mainloop()
• 72. 7. SYSTEM TESTING
7.1 INTRODUCTION TO TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies, and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests; each test type addresses a specific testing requirement.
TYPES OF TESTS
Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly, and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit and before integration. This is structural testing, which relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results. (A short unit-test sketch is given after the integration-testing paragraph below.)
Integration testing
Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
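To make unit testing concrete in the project's own language, here is a minimal sketch using Python's built-in unittest module. The helper encode_labels() is a hypothetical stand-in that mirrors the label-encoding step of the sample code in Section 6; it is not part of the original listing.
import unittest
from sklearn.preprocessing import LabelEncoder

def encode_labels(labels):
    # Hypothetical helper mirroring the pipeline's label-encoding step
    return LabelEncoder().fit_transform(labels)

class TestPreprocessing(unittest.TestCase):
    def test_labels_become_integers(self):
        encoded = encode_labels(["BENIGN", "PortScan", "BENIGN"])
        # identical raw labels must map to identical integer codes
        self.assertEqual(encoded[0], encoded[2])
        self.assertNotEqual(encoded[0], encoded[1])

if __name__ == "__main__":
    unittest.main()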
Functional test
Functional tests provide systematic demonstrations that the functions tested are available as specified by the business and technical requirements, system documentation, and user manuals. Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions, and special test cases. In addition, systematic coverage of business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of the current tests is determined.

System Test
System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration-oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.

White Box Testing
White box testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least of its purpose. It is used to test areas that cannot be reached from a black-box level.

Black Box Testing
Black box testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: the tester cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.

Unit Testing
Unit testing is usually conducted as part of a combined code-and-unit-test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases.

7.2 TESTING STRATEGIES
Field testing will be performed manually, and functional tests will be written in detail.

Test objectives
• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screen, messages and responses must not be delayed.

Features to be tested
• Verify that the entries are of the correct format.
• No duplicate entries should be allowed.
• All links should take the user to the correct page.
(A functional-test sketch for these input-validation objectives is given at the end of this chapter.)

Integration Testing
Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects. The task of the integration test is to check that components or software applications, e.g. components in a software system or, one step up, software applications at the company level, interact without error.

Test Results: All the test cases mentioned above passed successfully. No defects were encountered.

Acceptance Testing
User acceptance testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.
Test Results: All the test cases mentioned above passed successfully. No defects were encountered.
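As a sketch of the "valid input accepted / invalid input rejected" objectives listed under 7.2, the following hypothetical field validator and test case show how such checks can be automated; is_valid_duration() is not part of the project code and only illustrates the pattern.

import unittest

def is_valid_duration(value):
    # Hypothetical field validator: a duration must be a non-negative integer
    return isinstance(value, int) and value >= 0

class DurationFieldTest(unittest.TestCase):
    def test_valid_inputs_are_accepted(self):
        for value in (0, 1, 3600):          # identified classes of valid input
            self.assertTrue(is_valid_duration(value))

    def test_invalid_inputs_are_rejected(self):
        for value in (-1, 2.5, "ten", None):  # identified classes of invalid input
            self.assertFalse(is_valid_duration(value))

if __name__ == '__main__':
    unittest.main()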
8. SCREENSHOTS
To run the project, double-click the 'run.bat' file; the main application window opens. In this window, click the 'Upload Train Dataset' button and select the dataset. Here the 'kdd_train.csv' dataset is uploaded; after the upload completes, the dataset summary screen appears (a minimal loading sketch follows).
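Behind the 'Upload Train Dataset' button, the application essentially reads the CSV file into memory. A minimal sketch with pandas, assuming the kdd_train.csv file from the screenshots (the exact column layout depends on the dataset variant):

import pandas as pd

# Load the training dataset; 'kdd_train.csv' is the file shown in the screenshots
dataset = pd.read_csv('kdd_train.csv')
print(dataset.shape)   # the run above reports 9999 records
print(dataset.head())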
The summary screen shows that the dataset contains 9999 records. Next, click the 'Run Preprocessing TF-IDF Algorithm' button to convert the raw dataset into TF-IDF values. Once TF-IDF processing completes, click the 'Generate Event Vector' button to create event vectors from the TF-IDF matrix, one per distinct event (a sketch of this preprocessing step follows).
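A sketch of the TF-IDF step, continuing from the loading sketch above. It assumes each record is first flattened into a space-separated string of its field values, which is one plausible reading of "converting the raw dataset into TF-IDF values"; the report's exact preprocessing may differ.

from sklearn.feature_extraction.text import TfidfVectorizer

# Treat every dataset row as one "document" of space-separated field values
rows_as_text = dataset.astype(str).agg(' '.join, axis=1)
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(rows_as_text)   # sparse matrix of TF-IDF values
print(X.shape)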
The application then lists the unique event names it found, along with the total dataset size: 80% of the records (7999) are used for training and 20% (2000) for testing. With the train and test event models ready, click the 'Neural Network Profiling' button to build the LSTM and CNN models. The LSTM model is generated first and its training epochs begin, with a starting accuracy of about 0.94. Training over the entire dataset can take some time, so wait until both the LSTM and CNN training processes complete; the LSTM iterates over all 7999 training records to build its model (a sketch of the split and the LSTM model follows).
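A sketch of the 80/20 split and a small LSTM profile model, assuming a Keras backend and a labels array (normal vs. attack) prepared earlier from the dataset's class column. The layer sizes are illustrative rather than the report's exact architecture, and a multi-class label column would need a softmax output instead of the sigmoid used here.

import numpy as np
from sklearn.model_selection import train_test_split
from keras.models import Sequential            # tensorflow.keras works the same way
from keras.layers import LSTM, Dense

X_dense = X.toarray()
X_seq = X_dense.reshape((X_dense.shape[0], 1, X_dense.shape[1]))   # one timestep per record
y = np.asarray(labels)                          # `labels` is assumed prepared earlier

# 80% training (7999 records), 20% testing (2000 records), as in the run above
X_train, X_test, y_train, y_test = train_test_split(X_seq, y, test_size=0.2)

lstm_model = Sequential([
    LSTM(64, input_shape=(1, X_dense.shape[1])),
    Dense(1, activation='sigmoid'),             # binary normal/attack output
])
lstm_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
lstm_model.fit(X_train, y_train, epochs=10, batch_size=64,
               validation_data=(X_test, y_test))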
Once the LSTM completes all of its iterations, the CNN model begins training. The CNN starts its first epoch with an accuracy of about 0.72 and, after completing all 10 epochs, reaches an improved accuracy of 0.99, i.e. 99% when multiplied by 100. The CNN therefore gives better accuracy than the LSTM here, and the GUI screen then shows all the details (a companion CNN sketch follows).
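A companion sketch for the CNN profile model, run for the 10 epochs mentioned above. Applying Conv1D across the TF-IDF vector is one plausible layout, not necessarily the report's exact architecture.

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

cnn_model = Sequential([
    Conv1D(32, kernel_size=3, activation='relu',
           input_shape=(X_dense.shape[1], 1)),  # treat the TF-IDF vector as a 1-D signal
    MaxPooling1D(pool_size=2),
    Flatten(),
    Dense(64, activation='relu'),
    Dense(1, activation='sigmoid'),
])
cnn_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Reshape from the LSTM layout (n, 1, features) to the Conv1D layout (n, features, 1)
X_train_cnn = X_train.reshape((X_train.shape[0], X_dense.shape[1], 1))
X_test_cnn = X_test.reshape((X_test.shape[0], X_dense.shape[1], 1))
cnn_model.fit(X_train_cnn, y_train, epochs=10, batch_size=64,
              validation_data=(X_test_cnn, y_test))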
The GUI now shows the accuracy, precision, recall and FMeasure values for both deep-learning models. Next, click the 'Run SVM Algorithm' button to run the existing SVM baseline; its output values appear in the same panel. Then click 'Run KNN Algorithm' to run the KNN baseline (a sketch of the SVM run and its metrics follows).
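A sketch of the SVM baseline and the four reported metrics, using scikit-learn; flattening the sequence shape back to 2-D reuses the split above, and the macro averaging is an assumption rather than a documented choice of the report.

from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Flatten the (n, 1, features) sequences back to 2-D for the classical models
X_train_2d = X_train.reshape((X_train.shape[0], -1))
X_test_2d = X_test.reshape((X_test.shape[0], -1))

svm_cls = SVC()
svm_cls.fit(X_train_2d, y_train)
pred = svm_cls.predict(X_test_2d)

svm_acc = accuracy_score(y_test, pred) * 100
svm_precision = precision_score(y_test, pred, average='macro') * 100
svm_recall = recall_score(y_test, pred, average='macro') * 100
svm_fm = f1_score(y_test, pred, average='macro') * 100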
After the KNN output values are displayed, click 'Run Random Forest Algorithm' to run the Random Forest baseline, whose output values appear in turn. Then click 'Run Naive Bayes Algorithm' to run the Naive Bayes baseline.
Once the Naive Bayes output values appear, click 'Run Decision Tree Algorithm' to run the Decision Tree baseline (a compact sketch of these classical baselines follows). Finally, click the 'Accuracy Comparison Graph' button to compare the accuracy of all the algorithms.
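The remaining classical baselines all follow the same fit/predict/score pattern, so a compact sketch can loop over them; the hyperparameters shown are scikit-learn defaults, not necessarily the report's settings.

from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

baselines = {
    'KNN': KNeighborsClassifier(n_neighbors=5),
    'Random Forest': RandomForestClassifier(n_estimators=100),
    'Naive Bayes': GaussianNB(),
    'Decision Tree': DecisionTreeClassifier(),
}
for name, model in baselines.items():
    model.fit(X_train_2d, y_train)
    pred = model.predict(X_test_2d)
    print('%s FMeasure: %.2f' % (name, f1_score(y_test, pred, average='macro') * 100))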
In the accuracy graph, the x-axis shows the algorithm name and the y-axis its accuracy; from this graph we can conclude that LSTM and CNN perform best. Clicking 'Precision Comparison Graph' produces the precision chart, in which CNN performs best, and clicking 'Recall Comparison Graph' then produces the recall chart.
In the recall graph, LSTM performs best. Clicking the 'FMeasure Comparison Graph' button produces the final chart. Across all four comparison graphs, LSTM and CNN perform well on accuracy, precision, recall and FMeasure.
9. CONCLUSIONS
In this work, Support Vector Machine, ANN, CNN, Random Forest and deep-learning algorithms were evaluated comparatively on the modern CICIDS2017 dataset. The results show that the deep-learning algorithm performed significantly better than SVM, ANN, RF and CNN. In future work we intend to study port-scan attempts as well as other attack types, combining machine-learning and deep-learning algorithms with Apache Hadoop and Spark technologies on this dataset. All of these algorithms help us detect cyber attacks in a network. The approach works as follows: many attacks have occurred over past years, and when those attacks were recognized, the feature values under which they occurred were stored in datasets. Using these datasets, we predict whether a cyber attack has taken place. These predictions are made by four algorithms (SVM, ANN, RF, CNN), and this work identifies which algorithm yields the best accuracy, and hence the most reliable results, for determining whether a cyber attack has occurred.
10. REFERENCES
1. K. Graves, CEH: Official Certified Ethical Hacker Review Guide: Exam 312-50. John Wiley & Sons, 2007.
2. R. Christopher, "Port scanning techniques and the defense against them," SANS Institute, 2001.
3. M. Baykara, R. Daş, and İ. Karadoğan, "Bilgi güvenliği sistemlerinde kullanılan araçların incelenmesi" [A survey of tools used in information security systems], in 1st International Symposium on Digital Forensics and Security (ISDFS'13), 2013, pp. 231–239.
4. S. Staniford, J. A. Hoagland, and J. M. McAlerney, "Practical automated detection of stealthy portscans," Journal of Computer Security, vol. 10, no. 1–2, pp. 105–136, 2002.
5. S. Robertson, E. V. Siegel, M. Miller, and S. J. Stolfo, "Surveillance detection in high bandwidth environments," in Proceedings of the DARPA Information Survivability Conference and Exposition, vol. 1. IEEE, 2003, pp. 130–138.
6. K. Ibrahimi and M. Ouaddane, "Management of intrusion detection systems based-KDD99: Analysis with LDA and PCA," in 2017 International Conference on Wireless Networks and Mobile Communications (WINCOM). IEEE, 2017, pp. 1–6.
7. N. Moustafa and J. Slay, "The significant features of the UNSW-NB15 and the KDD99 datasets for network intrusion detection systems," in 2015 4th International Workshop on Building Analysis Datasets and Gathering Experience Returns for Security (BADGERS). IEEE, 2015, pp. 25–31.
8. L. Sun, T. Anthony, H. Z. Xia, J. Chen, X. Huang, and Y. Zhang, "Detection and classification of malicious patterns in network traffic using Benford's law," in 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2017, pp. 864–872.
9. S. M. Almansob and S. S. Lomte, "Addressing challenges for intrusion detection system using naive bayes and PCA algorithm," in 2017 2nd International Conference for Convergence in Technology (I2CT). IEEE, 2017, pp. 565–568.