ArtificialIntelligenceArticles
For those with a passion for:
1. #ArtificialIntelligence
2. Machine Learning
3. Deep Learning
4. #DataScience
5. #Neuroscience
6. #ResearchPapers
7. Related Courses and Ebooks
New #deeplearning paper at the intersection of #AI #mathematics #psychology and #neuroscience: A mathematical theory of semantic development in deep neural networks: arxiv.org/abs/1810.10531
https://t.me/ArtificialIntelligenceArticles
At the heart of most deep learning generalization bounds (VC, Rademacher, PAC-Bayes) is uniform convergence (u.c.). We argue why u.c. may be unable to provide a complete explanation of generalization, even if we take into account the implicit bias of SGD.
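For context, the generic textbook shape of a uniform-convergence bound (my paraphrase, not a statement from the paper) is:

```latex
% With probability at least 1 - \delta over an i.i.d. training sample S of size m,
% the gap between population risk L_D and empirical risk \hat{L}_S is bounded
% uniformly over the whole hypothesis class H:
\[
\sup_{h \in \mathcal{H}} \bigl| L_{\mathcal{D}}(h) - \hat{L}_{S}(h) \bigr|
\;\le\; \epsilon(\mathcal{H}, m, \delta).
\]
```

As the post says, the argument is that bounds of this uniform form may be too loose to explain why overparameterized deep nets generalize, even when restricted to the hypotheses SGD actually reaches.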

https://arxiv.org/pdf/1902.04742.pdf

https://t.me/ArtificialIntelligenceArticles
Looking to fall in love... with science? 😍 Help scientists train machines to study stroke lesions by swiping on our app:

https://braindrles.us/#/

#citizenscience #braindr #braindrles #neuroscience #machinelearning #swipesforscience #openscience #OHBM2019
Post-doc position in Deep Learning and NLP at Emory School of Medicine (Atlanta, USA)
The Department of Biomedical Informatics at Emory School of Medicine is searching for a postdoctoral scholar. The laboratory is led by Dr. Imon Banerjee (website), who is also affiliated with the Departments of Radiology and Biomedical Informatics at Emory University. The lab focuses on cutting-edge research at the intersection of imaging science and biomedical informatics, developing and applying AI methods to large amounts of medical data for biomedical discovery, precision medicine, and precision health (early detection and prediction of future disease).

The postdoctoral scholar will work on two core research topics: (1) developing foundational AI methods for analyzing and extracting information from clinical texts, and (2) developing clinical prediction models using multi-modal and longitudinal electronic medical record (EMR) data. The scholar will deploy and evaluate these methods as clinical applications to transform medical care.

Requirements:

· Post-graduate degree (PhD or MD, completed or near completion) in biomedical data science, informatics, computer science, engineering, statistics, computational biology, or a related field, with a background or interest in imaging

· Experience in machine learning and AI, particularly in computer vision and image analysis

· Strong record of distinguished scholarly achievement

· Outstanding communication and presentation skills, with fluency in spoken and written English

Interested applicants should submit a Curriculum Vitae and a brief statement of research interests using this link: https://faculty-emory.icims.com/jobs/42390/job
https://t.me/ArtificialIntelligenceArticles
AI meets physics - using artificial neural networks to approximate solutions of the three-body problem.


I'm increasingly intrigued by this paper (https://arxiv.org/pdf/1910.07291.pdf) showing the application of artificial neural networks to the infamously insoluble three-body problem in physics, where we try to predict the future positions of three bodies given Newton's equations of motion. I think it has important implications for how we think about approximation and how we achieve it in practice.

From the authors: "Our results provide evidence that, for computationally challenging regions of phase-space, a trained ANN can replace existing numerical solvers, enabling fast and scalable simulations of many-body systems to shed light on outstanding phenomena such as the formation of black-hole binary systems or the origin of the core collapse in dense star clusters."
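
To make the setup concrete, here is a minimal sketch of the idea in PyTorch (my own illustration with assumed dimensions and layer sizes, not the paper's actual network or training pipeline): a small feed-forward net is trained to map an initial configuration plus a time t to the bodies' positions, with labels produced offline by a conventional numerical integrator.

```python
# Minimal sketch (assumption: NOT the authors' code, architecture, or hyperparameters).
# Idea: learn a map f(initial conditions, t) -> positions of the three bodies,
# supervised by trajectories precomputed with a numerical integrator.
import torch
import torch.nn as nn

class ThreeBodyNet(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        # Assumed planar setup: 3 bodies x (2 position + 2 velocity) = 12 inputs, plus time t = 13.
        # Output: the 3 bodies' 2D positions at time t = 6 numbers.
        self.net = nn.Sequential(
            nn.Linear(13, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 6),
        )

    def forward(self, x):
        return self.net(x)

def train(model, inputs, targets, epochs=200, lr=1e-3):
    """inputs: (N, 13) initial conditions + t; targets: (N, 6) integrator positions."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        opt.step()
    return model
```

Once trained, evaluating the network is a fixed-cost forward pass, which is where the claimed speed-up over step-by-step numerical integration comes from.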

https://t.me/ArtificialIntelligenceArticles
My position is very similar to Yoshua's.
Making sequential reasoning compatible with gradient-based learning is one of the challenges of the next decade.
But gradient-based learning applied to networks of parameterized modules (aka "deep learning") is part of the solution.


Gary Marcus likes to cite me when I talk about my current research program, which studies the weaknesses of current deep learning systems in order to devise systems that are stronger in higher-level cognition and in combinatorial (and systematic) generalization, including the handling of causality and reasoning. He disagrees with the view that Yann LeCun, Geoff Hinton and I have expressed that neural nets can indeed be a "universal solvent" for incorporating further cognitive abilities in computers. He prefers to think of deep learning as limited to perception and needing to be combined in a hybrid with symbolic processing. I disagree with this view in a subtle way.

I agree that the goals of GOFAI (like the ability to perform the sequential reasoning characteristic of system 2 cognition) are important, but I believe they can be achieved while staying in a deep learning framework, albeit one that makes heavy use of attention mechanisms (hence my 'consciousness prior' research program) and the injection of new architectural ingredients (e.g. modularity) and training frameworks (e.g. meta-learning and an agent-based view).

What I bet is that a simple hybrid, in which the outputs of the deep net are discretized and then passed to a GOFAI symbolic processing system, will not work. Why? Many reasons: (1) you need learning in the system 2 component as well as in the system 1 part; (2) you need to represent uncertainty there as well; (3) brute-force search (the main inference tool of symbol-processing systems) does not scale, and instead humans use unconscious (system 1) processing to guide the search involved in reasoning, so system 1 and system 2 are very tightly integrated; and (4) your brain is a neural net all the way.

https://t.me/ArtificialIntelligenceArticles
"The Human Body is a Black Box": Supporting Clinical Decision-Making with Deep Learning
Mark Sendak et al.: https://arxiv.org/abs/1911.08089
#deeplearning #neuroscience #artificialintelligence