Knowledge Graphs @ AAAI 2020
overview of several topics:
- KG-Augmented Language Models: in different flavours
- Entity Matching in Heterogeneous KGs: finally no manual mappings
- KG Completion and Link Prediction: neuro-symbolic and temporal KGs
- KG-based Conversational AI and Question Answering: going big
Link: https://medium.com/@mgalkin/knowledge-graphs-aaai-2020-c457ad5aafc0
#AAAI2020 #KnowledgeGraph #graph #kg
A Deep Learning Approach to Antibiotic Discovery
A new antibiotic was found using DL; it is claimed to be effective, in mice, against several bacteria that are resistant to existing antibiotics.
The problem with finding good antibiotics is that the space of candidate molecules is prohibitively large to test exhaustively in the lab. They train a model that receives a molecule as a graph and predicts how effective it is against E. coli.
For every edge, they run a single-layer NN that receives the activations and features of the source node plus the edge features, and produces new activations for that edge. A node's activations are the sum of all incoming edge activations, and the overall activation vector for the molecule is the sum over all nodes.
Finally, a 2-layer NN receives the overall molecule vector plus some standard handcrafted features and outputs a binary classification; everything is trained end-to-end on 2.3K molecules. They then run predictions on a dataset of 6K molecules that are at different stages of investigation.
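The message-passing and readout steps above can be sketched in plain NumPy. Everything here is illustrative: the hidden size, number of message-passing rounds, and random weights are assumptions, not the paper's actual hyperparameters, and the handcrafted features mentioned above are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy molecule graph: 3 atoms, bonds as directed edges (src, dst).
node_feats = rng.normal(size=(3, 4))           # per-atom features
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]       # each bond in both directions
edge_feats = rng.normal(size=(len(edges), 2))  # per-bond features

D = 8  # hidden size (hypothetical)
W_edge = rng.normal(size=(D + 4 + 2, D)) * 0.1  # single-layer edge NN
W1 = rng.normal(size=(D, D)) * 0.1              # 2-layer readout NN
W2 = rng.normal(size=(D, 1)) * 0.1

relu = lambda x: np.maximum(x, 0)
sigmoid = lambda x: 1 / (1 + np.exp(-x))

def sum_incoming(edge_act):
    # Node activations = sum of activations of all incoming edges.
    node_act = np.zeros((3, D))
    for i, (_, dst) in enumerate(edges):
        node_act[dst] += edge_act[i]
    return node_act

edge_act = np.zeros((len(edges), D))
for _ in range(3):  # number of rounds is an assumption
    node_act = sum_incoming(edge_act)
    new_edge_act = np.zeros_like(edge_act)
    for i, (src, _) in enumerate(edges):
        # Single-layer NN over [source-node activations, source-node
        # features, edge features] -> new activations for this edge.
        inp = np.concatenate([node_act[src], node_feats[src], edge_feats[i]])
        new_edge_act[i] = relu(inp @ W_edge)
    edge_act = new_edge_act

# Molecule vector = sum over all nodes; readout = 2-layer NN with a
# binary (sigmoid) output: predicted activity against E. coli.
mol_vec = sum_incoming(edge_act).sum(axis=0)
p_active = sigmoid(relu(mol_vec @ W1) @ W2)
print(p_active.shape)  # → (1,)
```

With random weights the output probability is meaningless; in the real system these weights are learned end-to-end from the 2.3K labeled molecules.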
They took the top 51 predictions and manually ranked them by dissimilarity to the training set, how far along they are in the investigation pipeline, and a low score from an external toxicity model. The top pick, which they named Halicin, was tested in the lab.
article (only pdf): https://www.cell.com/cell/pdf/S0092-8674(20)30102-1.pdf
P.S. Thanks to @Sim0nsays for his cool summary on Twitter.
#medicine #molecule #antibiotic #dl #graph