PREDICTING DRUG TARGET
INTERACTION USING ADAPTIVE
DEEP BELIEF NETWORK
Rashim Dhaubanjar (BCT/070/129)
Rupika Bista (BCT/070/131)
Shiva Gautam (BCT/070/192)
Sunil Bist (BCT/070/192)
INTRODUCTION
• Drug development is an expensive and time-consuming process with an
extremely low success rate.
• A core problem in pharmacology is determining the interactions
between drug compounds and target proteins in order to understand
and study their effects.
• In this project we generalize the applicability of the PDTI method
to drug compounds for which no interactions are known.
PROBLEM STATEMENT
• Very few drug-target interactions are known
• No existing model predicts drug-target interactions with high
accuracy and minimal side-effect risk
• High time complexity of training
OBJECTIVES
• Build PDTI using an Adaptive Deep Belief Network framework
• Use the GPU for a parallel implementation of the different phases of
the Deep Belief Network
• Evaluate the model using cross-validation on a standard dataset
Deep Belief Networks (DBN)
• A stack of several RBMs, trained one on top of another.
Restricted Boltzmann Machines
activation f((weight w * input x) + bias b ) = output a
Reconstructions
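The activation and reconstruction steps can be sketched in Java (a minimal illustration with hypothetical names and shapes, not the project's actual implementation): the forward pass computes f(w·x + b) at each hidden node, and the reconstruction reuses the same weights in a backward pass with visible-layer biases.

```java
// Minimal RBM forward/reconstruction sketch (illustrative names and shapes).
public class RbmSketch {
    // Logistic activation f(z) used at each node.
    public static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    // Forward pass: hidden activations from visible input x.
    // w[i][j] connects visible node i to hidden node j.
    public static double[] forward(double[][] w, double[] x, double[] hBias) {
        double[] h = new double[hBias.length];
        for (int j = 0; j < h.length; j++) {
            double z = hBias[j];
            for (int i = 0; i < x.length; i++) z += w[i][j] * x[i];
            h[j] = sigmoid(z);
        }
        return h;
    }

    // Reconstruction: backward pass with the SAME weights plus
    // a visible-layer bias, approximating the original input.
    public static double[] reconstruct(double[][] w, double[] h, double[] vBias) {
        double[] v = new double[vBias.length];
        for (int i = 0; i < v.length; i++) {
            double z = vBias[i];
            for (int j = 0; j < h.length; j++) z += w[i][j] * h[j];
            v[i] = sigmoid(z);
        }
        return v;
    }
}
```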
DBNs can be trained by a greedy layer-wise
approach:
• Train the first layer on your data without the labels (unsupervised).
• Freeze the first layer's parameters and train the second layer,
using the output of the first layer as the unsupervised input to the
second layer.
• Use the outputs of the final layer as inputs to a supervised
layer/model and train the last supervised layer(s), leaving the
earlier weights frozen.
• Unfreeze all weights and fine-tune the full network with a
supervised approach, starting from the pre-trained weight settings.
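The freeze-and-feed-forward data flow of greedy layer-wise pretraining can be sketched as below. Here `trainRbm` is a stub (a fixed squashing map) standing in for real contrastive-divergence training; only the structure — train a layer, freeze it, pass its outputs to the next layer — is meant literally.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Sketch of greedy layer-wise pretraining data flow (illustrative only).
public class GreedyPretrain {
    // Stand-in for unsupervised RBM training: returns the learned
    // feature map for one layer. Real code would run contrastive
    // divergence on `data`; here it is a fixed elementwise sigmoid.
    static UnaryOperator<double[]> trainRbm(double[][] data) {
        return v -> {
            double[] h = new double[v.length];
            for (int i = 0; i < v.length; i++) h[i] = 1.0 / (1.0 + Math.exp(-v[i]));
            return h;
        };
    }

    // Train layer k on the output of the frozen layers 0..k-1.
    public static List<UnaryOperator<double[]>> pretrain(double[][] data, int numLayers) {
        List<UnaryOperator<double[]>> layers = new ArrayList<>();
        double[][] cur = data;
        for (int k = 0; k < numLayers; k++) {
            UnaryOperator<double[]> layer = trainRbm(cur);  // unsupervised training
            layers.add(layer);                              // freeze this layer
            double[][] next = new double[cur.length][];
            for (int n = 0; n < cur.length; n++) next[n] = layer.apply(cur[n]);
            cur = next;                                     // input for next layer
        }
        return layers;
    }
}
```

After this loop, the frozen stack initialises the full network for supervised fine-tuning.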
Unsupervised Pretraining
Feed-forward nets
• Information flow is unidirectional
• Data is presented to the input layer
• Passed on to the hidden layer
• Passed on to the output layer
• Information is distributed
• Information processing is parallel
• Internal representation (interpretation) of data
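The unidirectional flow above can be sketched as function composition: the hidden layer's output becomes the output layer's input, and information never flows backwards (hypothetical shapes, not the project's actual network).

```java
// Illustrative feed-forward pass: input -> hidden -> output, one direction only.
public class FeedForwardSketch {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    // One dense layer: out = f(W*x + b).
    static double[] layer(double[][] w, double[] b, double[] x) {
        double[] out = new double[b.length];
        for (int j = 0; j < b.length; j++) {
            double z = b[j];
            for (int i = 0; i < x.length; i++) z += w[i][j] * x[i];
            out[j] = sigmoid(z);
        }
        return out;
    }

    // Full forward pass: the hidden layer's output is fed, unchanged,
    // to the output layer; no signal travels back during inference.
    public static double[] forward(double[][] w1, double[] b1,
                                   double[][] w2, double[] b2, double[] x) {
        return layer(w2, b2, layer(w1, b1, x));
    }
}
```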
Backpropagation
• The backpropagation algorithm is a sensible approach for dividing the
error contribution among the individual weights.
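For a single sigmoid output unit with squared error, the idea of dividing blame among weights reduces to the delta rule, sketched below (a minimal single-unit case, not the project's full multi-layer implementation):

```java
// Backpropagation idea for one sigmoid output unit with squared error:
// the error term ("delta") apportions the observed error to each weight.
public class BackpropSketch {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    // One gradient-descent step on w for one example (x, target).
    public static double[] step(double[] w, double[] x, double target, double lr) {
        double z = 0;
        for (int i = 0; i < w.length; i++) z += w[i] * x[i];
        double y = sigmoid(z);
        // Output error term: observed error times activation derivative.
        double delta = (y - target) * y * (1 - y);
        double[] updated = w.clone();
        // Each weight's share of the blame scales with its input.
        for (int i = 0; i < w.length; i++) updated[i] -= lr * delta * x[i];
        return updated;
    }
}
```

In a multi-layer network the same delta is propagated backwards, layer by layer, until the earliest hidden layer is reached.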
GPU Implementation
• Training a DBN is a computationally expensive task: it involves
training several RBMs and may require a considerable amount of time.
Solution?
• GPU Parallel implementation
GPU: GeForce 840M
CUDA compute capability: 5.0
CUDA cores: 384
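The project programs the GPU through JCuda; as a library-neutral illustration of why the matrix formulation parallelizes well, here is a CPU sketch using Java parallel streams. Each output row of a matrix product is independent, which is exactly what lets the GPU's many cores work concurrently.

```java
import java.util.stream.IntStream;

// Parallel matrix multiply sketch: rows of the result are independent,
// so they can be computed concurrently (here by a parallel stream;
// on the GPU, by CUDA threads).
public class ParallelMatMul {
    public static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length, m = b[0].length, k = b.length;
        double[][] c = new double[n][m];
        IntStream.range(0, n).parallel().forEach(i -> {  // one worker per row
            for (int j = 0; j < m; j++) {
                double s = 0;
                for (int t = 0; t < k; t++) s += a[i][t] * b[t][j];
                c[i][j] = s;
            }
        });
        return c;
    }
}
```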
Linear vs Matrix (GPU)
[Chart: execution time (0–1000) at Iterations 1, 5, and 10, comparing the Linear and Matrix (GPU) implementations]
TASK DIVISION
LANGUAGES USED
• Java – Implementation of Algorithms
• JCuda – Programming for GPU
• VBA – Integration with Excel
LIMITATIONS
• Protein sequence domain-profile features were not used for training
• Only the 2D substructure fingerprints of drug compounds were used as
the pre-training dataset
Work Complete
• We completed algorithm development, followed by testing and
debugging, and implemented the algorithm in code.
Work Remaining
• Develop a proper GUI where the input is the name of a drug and the
output is its interaction with a protein (1 if yes, 0 if no).
THANK YOU

Editor's Notes

  • #7: At node 1 of the hidden layer, x is multiplied by a weight and added to a so-called bias. The result of those two operations is fed into an activation function, which produces the node’s output, or the strength of the signal passing through it, given input x.
  • #8: In this introduction to restricted Boltzmann machines, we focus on how they learn to reconstruct data by themselves in an unsupervised fashion (unsupervised means without ground-truth labels), making several forward and backward passes between the visible layer and hidden layer no. 1 without involving a deeper network. In the reconstruction phase, the activations of hidden layer no. 1 become the input to a backward pass. They are multiplied by the same weights, one per internode edge, just as x was weight-adjusted on the forward pass. The sum of those products is added to a visible-layer bias at each visible node, and the output of those operations is a reconstruction, i.e. an approximation of the original input.
  • #10: Pre-training a DNN. The first row represents the first layer being trained to reproduce X; in the second row the first layer's weights are fixed and the second layer is trained to reproduce the output of the first layer, and so on. Finally, we use the supervised data to learn all the weights after they have been initialised to the weights learnt in the previous step.
  • #12: The idea of the algorithm is: 1. Compute the error term for the output units using the observed error. 2. Starting from the output layer, repeatedly propagate the error term back to the previous layer and update the weights between the two layers, until the earliest hidden layer is reached.