Unit-1- Introduction to Programming Paradigm
Programming Languages – Elements of Programming languages -
Programming Language Theory - Böhm-Jacopini structured program
theorem - Multiple Programming Paradigm – Programming Paradigm
hierarchy – Imperative Paradigm: Procedural, Object-Oriented and
Parallel processing – Declarative programming paradigm: Logic,
Functional and Database processing - Machine Codes – Procedural and
Object-Oriented Programming – Suitability of Multiple paradigms in the
programming language - Subroutine, method call overhead and Dynamic
memory allocation for message and object storage - Dynamically
dispatched message calls and direct procedure call overheads – Object
Serialization – Parallel Computing
Introduction to Programming Languages
What is a Programming Language?
•A programming language is a formal set of instructions that can be used to produce various kinds of
output. Programming languages are used to create software that controls the behavior of a machine,
particularly a computer.
•Purpose of Programming Languages
•Programming languages allow humans to communicate with machines in a way that is both
understandable to the machine and accessible to the human programmer. They bridge the gap between
human logic and machine instructions.
•Types of Programming Languages
•Low-Level Languages: These are closer to machine language (binary code) and include Assembly
Language. They provide more control over hardware but are more difficult to write and understand.
•High-Level Languages: These are closer to human languages and include languages like Python,
Java, and C++. They are easier to write, read, and maintain.
•Domain-Specific Languages: These are specialized for particular tasks, such as SQL for database
queries or HTML for web development.
Object-Oriented Languages: These languages emphasize the concept of objects,
which encapsulate both data and the functions (methods) that operate on that data.
Examples include Java, C++, C#, and Python.
Functional Languages: These languages treat computation as the evaluation of
mathematical functions and avoid changing state or mutable data. Examples
include Haskell, Lisp, and Erlang.
Domain-Specific Languages (DSLs): These languages are designed for specific
domains or problem areas, with specialized syntax and features tailored to those
domains. Examples include SQL for database management, HTML/CSS for web
development, and MATLAB for numerical computing.
Programming languages have different strengths and weaknesses, and
developers choose a language based on factors such as project requirements,
performance needs, development speed, community support, and personal
preference. Learning multiple languages can give programmers flexibility and allow
them to solve different types of problems more effectively.
Categories of Programming Languages
•Imperative Languages: These languages focus on how to execute tasks by specifying step-by-step
instructions (e.g., C, Java).
•Functional Languages: These emphasize the evaluation of functions and avoid changing state or
mutable data (e.g., Haskell, Lisp).
•Object-Oriented Languages: These languages use objects and classes to organize code in a way
that models real-world entities (e.g., Java, C++).
•Logic Languages: These are based on formal logic and allow programmers to declare what they
want rather than how to achieve it (e.g., Prolog).
•History and Evolution
•The first programming languages were developed in the early 1950s, with Assembly Language being
among the first. Over time, languages evolved to be more abstract and user-friendly, leading to the
modern languages we use today.
•Significance of Programming Languages
•Programming languages are crucial for the development of software, which runs almost every aspect
of modern life—from operating systems to applications, from websites to embedded systems in
devices.
Programming Language Theory
Formal Languages: Programming languages are often defined by formal
languages, which use formal grammar to describe the syntax and structure of
programs.
•Type Systems: A type system is a set of rules that assigns types to various
program constructs, such as variables, expressions, functions, and modules.
It ensures that operations are performed on compatible data types.
•Language Paradigms: These are different styles of programming, such as
imperative, functional, logic, and object-oriented paradigms. Each paradigm
offers different approaches to solving programming problems.
•Compiler Theory: This deals with how high-level programming languages
are translated into machine code that a computer can execute.
•Abstract Interpretation: A method used to analyze programs by simplifying
and approximating their behaviors, which helps in understanding program
properties and potential errors.
Elements of Programming Languages
•Syntax: Refers to the rules that define the structure of a programming language. It includes how symbols,
keywords, and punctuation are arranged to form valid programs. Syntax is like the grammar of a language.
•Semantics: Describes the meaning of syntactically valid programs. It defines what the program does when it
runs. For example, the semantics of an if statement is that it executes a block of code only if a certain condition is true.
•Pragmatics: Focuses on how programming language features are used in practice, considering aspects like
efficiency, readability, and ease of use.
•Types: Programming languages use types to classify data and determine what kind of operations can be
performed on that data. For example, integers, floats, and strings are different types in many languages.
•Variables and Scope: Variables are names given to data that can change over time. The scope of a variable
defines where it can be accessed in the program.
•Control Structures: These are constructs that dictate the flow of control in a program, such as loops (for, while),
conditionals (if, else), and branches (switch).
•Functions and Procedures: Functions (or procedures) are reusable blocks of code designed to perform a
specific task. They allow for code modularity and reduce repetition.
•Data Structures: These are ways to organize and store data in a program, such as arrays, lists, stacks, queues,
and trees.
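A short Java sketch (with hypothetical names, not from the original slides) that touches several of these elements at once: a typed variable with local scope, control structures, a function, and a data structure.
import java.util.List;
public class ElementsDemo {
    // A function (method): a reusable block of code with a specific task.
    static int sumOfEvens(List<Integer> numbers) {
        int sum = 0;                  // variable of type int with local scope
        for (int n : numbers) {       // control structure: loop
            if (n % 2 == 0) {         // control structure: conditional
                sum += n;
            }
        }
        return sum;
    }
    public static void main(String[] args) {
        List<Integer> data = List.of(1, 2, 3, 4, 5); // data structure: a list
        System.out.println(sumOfEvens(data));        // prints 6
    }
}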
Böhm-Jacopini Structured Program Theorem
Overview: The Böhm-Jacopini theorem, proposed by Corrado Böhm and Giuseppe Jacopini in
1966, is a fundamental result in programming language theory. It states that any computable
function can be implemented using just three control structures: sequence, selection (branching),
and iteration (loops).
Significance: This theorem laid the groundwork for structured programming, which advocates for
the use of these control structures to create clear, understandable, and maintainable programs. It
implies that "goto" statements, which were common in early programming, are unnecessary and
can lead to "spaghetti code."
Control Structures Defined:
•Sequence: The execution of statements in a linear order, one after another.
•Selection (Branching): Making decisions in a program using conditional statements like if-
else.
•Iteration (Looping): Repeating a block of code multiple times using loops like for, while,
and do-while.
•Structured Programming: The practice of structuring programs using these three control
structures to enhance readability and reduce complexity.
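A minimal Java sketch of the three structures (illustrative only):
public class ControlStructures {
    public static void main(String[] args) {
        // Sequence: statements execute one after another.
        int total = 0;
        int limit = 5;
        // Iteration: repeat a block with a loop.
        for (int i = 1; i <= limit; i++) {
            // Selection: branch on a condition.
            if (i % 2 == 0) {
                total += i;
            } else {
                total -= i;
            }
        }
        System.out.println(total); // prints -3
    }
}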
Machine Codes
• Definition:
• Machine code, or machine language, is the lowest-level programming language, consisting of
binary digits (0s and 1s) that the computer's central processing unit (CPU) can execute directly.
• Characteristics:
• Binary Format: Machine code is written in binary, which makes it difficult for humans to read
and write.
• Hardware-Specific: Machine code is specific to a computer's architecture, meaning that code
written for one type of CPU won't necessarily run on another.
• Fast Execution: Programs written in machine code are executed directly by the hardware,
making them extremely fast but difficult to debug and maintain.
• Use Cases:
• Machine code is primarily used in low-level programming, such as writing operating system
kernels, firmware, and device drivers.
Procedural Programming
•Definition:
•Procedural programming is a programming paradigm based on the concept of procedure calls, where
procedures, also known as functions, are a sequence of instructions that perform a specific task.
•Key Concepts:
•Functions/Procedures: Central to procedural programming, functions encapsulate reusable blocks of code.
•Sequence, Selection, Iteration: Control structures in procedural programming include sequences of
instructions, conditional statements (like if-else), and loops (like for, while).
•Global and Local Variables: Variables can be defined globally (accessible throughout the program) or locally
within a function.
•Advantages:
•Simplicity: Procedural programming is straightforward and easy to understand, especially for small programs.
•Reusability: Functions can be reused across different parts of a program, reducing redundancy.
•Limitations:
•Scalability: Procedural programming can become unwieldy in large projects, as it doesn’t naturally support
concepts like data encapsulation.
•Maintenance: Managing and modifying large procedural codebases can be challenging as they grow in
complexity.
•Examples:
•Languages like C, Pascal, and BASIC are well-known for procedural programming.
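Although the classic procedural languages are C, Pascal, and BASIC, the same style can be sketched in Java using only static functions and plain data (an illustration, not part of the original slides):
public class ProceduralStyle {
    // A procedure: a named sequence of instructions operating on its inputs.
    static double average(int[] values) {
        int sum = 0;
        for (int v : values) {            // iteration
            sum += v;
        }
        return values.length == 0 ? 0.0   // selection
                                  : (double) sum / values.length;
    }
    public static void main(String[] args) {
        int[] marks = {70, 82, 95};           // data passed explicitly to the procedure
        System.out.println(average(marks));   // prints 82.33...
    }
}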
Object-Oriented Programming (OOP)
•Definition:
•Object-Oriented Programming is a paradigm that uses "objects"—which are instances of classes—
to design and structure software. It models real-world entities using objects that encapsulate both
data and behavior.
•Key Concepts:
•Classes and Objects: A class is a blueprint for creating objects. An object is an instance of a class
containing attributes (data) and methods (functions).
•Encapsulation: Bundling data and methods that operate on the data within a single unit, or class,
and restricting access to some of the object’s components.
•Inheritance: The ability to create a new class based on an existing class, inheriting attributes and
behaviors from the parent class.
•Polymorphism: The ability to process objects differently depending on their data type or class. For
example, the same method name can be used in different classes.
•Abstraction: The concept of hiding complex implementation details and showing only the essential
features of an object.
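A small Java sketch tying these concepts together (hypothetical classes, not from the original slides): abstraction via an abstract method, encapsulation via private fields, inheritance, and polymorphism.
abstract class Shape {                        // abstraction: only the essential feature is exposed
    abstract double area();
}
class Circle extends Shape {                  // inheritance: Circle is a Shape
    private final double radius;              // encapsulation: state is hidden inside the class
    Circle(double radius) { this.radius = radius; }
    @Override double area() { return Math.PI * radius * radius; }
}
class Rectangle extends Shape {
    private final double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    @Override double area() { return width * height; }
}
public class ShapesDemo {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Rectangle(2.0, 3.0) };
        for (Shape s : shapes) {
            System.out.println(s.area());     // polymorphism: each object answers with its own area()
        }
    }
}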
Advantages:
Modularity: OOP helps in organizing code into manageable sections or classes.
Code Reusability: Inheritance allows for code reuse across multiple classes.
Maintainability: The modularity of OOP makes it easier to update and maintain code.
Scalability: OOP naturally supports more complex and scalable systems compared to
procedural programming.
Limitations:
Complexity: OOP can be more complex to learn and implement, especially for
beginners.
Overhead: The abstraction layers in OOP may introduce overhead, which can affect
performance.
Examples:
Java, C++, Python, and Ruby are popular object-oriented languages.
Direct Procedure Call:
• What it is: In a direct procedure call, the method to be invoked is
determined at compile-time. This is typical in languages with static
binding.
• Overhead: The overhead is minimal because the address of the
method is known at compile-time, allowing for efficient calls. There's
no need to determine which method to execute at runtime.
Dynamically Dispatched Message Calls:
• What it is: In dynamically dispatched calls, the method to be invoked
is determined at runtime. This is typical in object-oriented languages
with dynamic binding (e.g., polymorphism in Java, C++).
• Overhead: The overhead is higher than in direct calls because the
runtime system must determine the appropriate method to invoke. This
typically involves looking up a method in a table (such as a vtable in
C++), which adds additional processing time.
Key Differences:
1.Method Resolution:
1. Direct Procedure Call: The method is resolved at compile-time.
2. Dynamically Dispatched Call: The method is resolved at runtime.
2.Efficiency:
1. Direct Procedure Call: More efficient due to compile-time resolution.
2. Dynamically Dispatched Call: Less efficient because of the additional
runtime lookup.
Direct Procedure Call:
class A {
void display() {
System.out.println("A's display");
}
}
public class Main {
public static void main(String[] args) {
A a = new A();
a.display(); // Direct call to A's display method.
}
}
Dynamically Dispatched Message Calls:
class A {
void display() {
System.out.println("A's display");
}
}
class B extends A {
void display() {
System.out.println("B's display");
}
}
public class Main {
public static void main(String[] args) {
A a = new B();
a.display(); // Dynamically dispatched call to B's display method.
}
}
Suitability of Multiple Paradigms in
Programming Languages
•Definition:
•Multi-paradigm programming languages support more than one programming
paradigm, allowing developers to choose the best approach for solving a particular
problem.
•Advantages of Multi-Paradigm Languages:
•Flexibility: Developers can choose the most appropriate paradigm for each part of the
application. For example, procedural code might be used for simple tasks, while OOP is
used for more complex structures.
•Enhanced Problem-Solving: Different paradigms can offer different perspectives on
problem-solving. For example, functional programming emphasizes immutability and
pure functions, which can be beneficial for parallel processing.
•Code Reusability: Code written in different paradigms can be reused across different
parts of the application, promoting DRY (Don’t Repeat Yourself) principles.
Examples of Multi-Paradigm Languages:
•Python: Supports procedural, object-oriented, and functional programming
paradigms.
•C++: Primarily known for its object-oriented capabilities, but also supports
procedural and generic programming.
•JavaScript: Supports procedural, object-oriented, and functional
programming, making it versatile for different use cases.
•Suitability:
•Multi-paradigm languages are particularly suitable for large, complex projects
where different parts of the application may benefit from different
programming paradigms. They offer flexibility and adaptability, allowing
developers to leverage the strengths of each paradigm to create robust,
efficient, and maintainable software solutions.
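Java itself has become multi-paradigm: the same task can be written imperatively or in a functional style with streams. A minimal sketch (illustrative only):
import java.util.List;
public class MultiParadigmDemo {
    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3, 4, 5);
        // Imperative / procedural style: describe how, step by step.
        int imperativeSum = 0;
        for (int n : numbers) {
            if (n % 2 != 0) {
                imperativeSum += n * n;
            }
        }
        // Functional style: describe what, as a pipeline of operations.
        int functionalSum = numbers.stream()
                                   .filter(n -> n % 2 != 0)
                                   .mapToInt(n -> n * n)
                                   .sum();
        System.out.println(imperativeSum + " " + functionalSum); // both print 35
    }
}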
Serialization in Java
Java Serialization is the mechanism by which the state of an object is converted into a byte stream, so that the byte stream can be written to a file or stored in a database and later reverted back into a copy of the object. The byte stream includes information about the object's type and the types of the data stored in the object. Once the object has been serialized and written to a file, it can be read back and deserialized, meaning the type information and the bytes that represent the object can be used to recreate the object in memory.
Let’s understand streams in computer systems before
proceeding further. A stream is simply a sequence of data
elements. Data in the form of streams is generated from a
source and consumed at a destination.
The different data streams in computer systems are:
1. Byte Stream – a low-level I/O mechanism with no encoding scheme. Java programs
typically take a buffered approach when performing I/O on byte streams.
2. Data Stream – allows primitive data types to be read and written, and is used
to perform binary I/O on primitives. I/O operations can be performed efficiently
and conveniently for byte, char, boolean, short, int, long, float, double, and
String values (see the short DataOutputStream sketch after this list).
3. Character Stream – unlike a byte stream, a character stream can easily
translate to and from the local character set. It uses a proper encoding scheme
such as Unicode or ASCII and is composed of characters; Java uses the Unicode
system to store characters in character streams.
4. Object Stream – an object stream can convert the state of an object into a
byte stream so that it can be stored in a database or file or transported to
another location (serialization), and used at a later point in time to retrieve
the stored values and restore the object's original state.
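As a small illustration of the data-stream idea, DataOutputStream and DataInputStream write and read primitives in binary form (a sketch; the file name is arbitrary):
import java.io.*;
public class DataStreamDemo {
    public static void main(String[] args) throws IOException {
        // Write primitives in binary form.
        try (DataOutputStream out = new DataOutputStream(new FileOutputStream("primitives.bin"))) {
            out.writeInt(42);
            out.writeDouble(3.14);
            out.writeUTF("hello");
        }
        // Read them back in the same order.
        try (DataInputStream in = new DataInputStream(new FileInputStream("primitives.bin"))) {
            System.out.println(in.readInt());     // 42
            System.out.println(in.readDouble());  // 3.14
            System.out.println(in.readUTF());     // hello
        }
    }
}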
The process of serialization is platform independent, which means an object can
be serialized on one platform and deserialized on an entirely different platform.
To make an object serializable, we implement the java.io.Serializable interface.
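The serialization and deserialization examples below use an Employee class that is not shown on the slides; a minimal version consistent with the field names used there (empid, empname) would be:
import java.io.Serializable;
class Employee implements Serializable {
    private static final long serialVersionUID = 1L; // recommended for Serializable classes
    int empid;
    String empname;
    Employee(int empid, String empname) {
        this.empid = empid;
        this.empname = empname;
    }
}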
import java.io.*;
class SerializationExample
{
public static void main(String args[])
{
try
{
Employee emp =new Employee(1187345,"Andrew"); //Creating the object
FileOutputStream fout=new FileOutputStream("employee data.txt"); //Creating stream and writing the object
ObjectOutputStream out=new ObjectOutputStream(fout);
out.writeObject(emp);
out.flush();
out.close(); //closing the stream
System.out.println("Data has been read from the file.");
}
catch(Exception e)
{
e.printStackTrace();
}
}
}
import java.io.*;
class DeserializationExample
{
public static void main(String args[])
{
try
{
//Creating stream to read the object
ObjectInputStream in=new ObjectInputStream(new FileInputStream("employee data.txt"));
Employee emp=(Employee)in.readObject();
//printing the data of the serialized object
System.out.println(emp.empid+" "+emp.empname);
//closing the stream
in.close();
}
catch(Exception e)
{
e.printStackTrace();
}
}
}
Parallel Computing
• Parallel computing refers to the use of multiple processors or computing resources to solve
a computational problem or perform a task simultaneously.
• It involves breaking down a problem into smaller parts that can be solved concurrently or in
parallel, thus achieving faster execution and increased computational power.
• Parallel computing can be applied to various types of problems, ranging from
computationally intensive scientific simulations and data analysis to web servers handling
multiple requests simultaneously.
• It is particularly beneficial for tasks that can be divided into independent subtasks that can
be executed concurrently.
• There are different models and approaches to parallel computing:
Task Parallelism:
• In task parallelism, the problem is divided into multiple independent tasks or subtasks that can
be executed concurrently.
• Each task is assigned to a separate processing unit or thread, allowing multiple tasks to be
processed simultaneously.
• Task parallelism is well-suited for irregular or dynamic problems where the execution time of
each task may vary.
Data Parallelism:
• Data parallelism involves dividing the data into smaller chunks and processing them simultaneously on
different processing units.
• Each unit operates on its portion of the data, typically applying the same computation or algorithm to each
chunk.
• Data parallelism is commonly used in scientific simulations, image processing, and numerical computations.
Message Passing:
• Message passing involves dividing the problem into smaller tasks that communicate and exchange data by
sending messages to each other.
• Each task operates independently and exchanges information with other tasks as needed.
• This approach is commonly used in distributed systems and parallel computing frameworks such as MPI
(Message Passing Interface).
Shared Memory:
• Shared memory parallelism involves multiple processors or threads accessing and modifying a shared memory
space.
• This model allows parallel tasks to communicate and synchronize by reading and writing to shared memory
locations.
• Programming models such as OpenMP and Pthreads utilize shared memory parallelism.
•Task Parallelism: Different tasks, done simultaneously.
•Data Parallelism: The same task, done on different parts.
•Message Passing: Different tasks, done in different places, requiring
communication.
•Shared Memory: Different tasks, done together in the same place,
sharing resources.
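A compact Java sketch of two of these models (illustrative only): task parallelism with an ExecutorService, and data parallelism with a parallel stream.
import java.util.concurrent.*;
import java.util.stream.LongStream;
public class ParallelDemo {
    public static void main(String[] args) throws Exception {
        // Task parallelism: independent tasks submitted to a pool of worker threads.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Callable<Long> squareTask = () -> LongStream.rangeClosed(1, 1_000).map(i -> i * i).sum();
        Callable<Long> cubeTask   = () -> LongStream.rangeClosed(1, 1_000).map(i -> i * i * i).sum();
        Future<Long> squares = pool.submit(squareTask);
        Future<Long> cubes   = pool.submit(cubeTask);
        System.out.println(squares.get() + " " + cubes.get());
        pool.shutdown();
        // Data parallelism: the same operation applied to chunks of the data in parallel.
        long parallelSum = LongStream.rangeClosed(1, 1_000_000).parallel().sum();
        System.out.println(parallelSum); // 500000500000
    }
}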
• Parallel computing offers several benefits, including:
Increased speed:
• By dividing the problem into smaller parts and executing them
simultaneously, parallel computing can significantly reduce the overall
execution time and achieve faster results.
Enhanced scalability:
• Parallel computing allows for the efficient utilization of multiple processing
units or resources, enabling systems to scale and handle larger workloads.
Improved performance:
• Parallel computing enables the execution of complex computations and
simulations that would otherwise be infeasible or take an impractical amount
of time with sequential processing.
• However, parallel computing also introduces challenges such as load balancing,
data synchronization, and communication overhead.
• Proper design and optimization techniques are essential to ensure efficient and
effective parallel execution.
• Overall, parallel computing is a powerful approach for achieving high-
performance computing and tackling complex problems by harnessing the
capabilities of multiple processing units or resources.
• It plays a crucial role in various domains, including scientific research, data
analysis, artificial intelligence, and large-scale computing systems.