🔍 DeepPavlov: An open-source library for end-to-end dialogue systems and chatbots
article: https://medium.com/tensorflow/deeppavlov-an-open-source-library-for-end-to-end-dialog-systems-and-chatbots-31cf26849e37
research: https://colab.research.google.com/github/deepmipt/dp_notebooks/blob/master/DP_tf.ipynb
code: https://github.com/deepmipt/DeepPavlov
Medium
DeepPavlov: an open-source library for end-to-end dialog systems and chatbots
A guest post by Vasily Konovalov
⭐️Fine-Tuning GPT-2 from Human Preferences
#OpenAI team fine-tuned the 774M-parameter model to achieve better scores in #summarization and stylistic text continuation, as judged by human labelers.
The article is definitely worth reading (approx. 15 min), with a "Challenges and lessons learned" section and examples.
Link: https://openai.com/blog/fine-tuning-gpt-2/
Paper: https://arxiv.org/abs/1909.08593
Code: https://github.com/openai/lm-human-preferences
#NLP #NLU #finetuning
Openai
Fine-tuning GPT-2 from human preferences
We’ve fine-tuned the 774M parameter GPT-2 language model using human feedback for various tasks, successfully matching the preferences of the external human labelers, though those preferences did not always match our own. Specifically, for summarization tasks…
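The core of the approach is a reward model trained on pairwise human comparisons, which the policy is then fine-tuned against. A minimal sketch of the pairwise (Bradley-Terry style) preference loss; the function name and scalar-reward inputs are illustrative, not OpenAI's actual code:

```python
import math

def preference_loss(r_chosen, r_rejected):
    """Negative log-likelihood that the human-preferred sample wins under a
    Bradley-Terry model: P(chosen beats rejected) = sigmoid(r_chosen - r_rejected)."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# The loss shrinks as the reward model separates the preferred sample further.
```

With equal rewards the loss is exactly log 2 (a coin flip), and it decreases monotonically as the margin for the preferred sample grows.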
🗣 Using AI-generated questions to train NLP systems
https://ai.facebook.com/blog/research-in-brief-unsupervised-question-answering-by-cloze-translation/
code:
https://github.com/facebookresearch/UnsupervisedQA
paper:
https://research.fb.com/publications/unsupervised-question-answering-by-cloze-translation/
Facebook
Research in Brief: Unsupervised Question Answering by Cloze Translation
Facebook AI is releasing code for a self-supervised technique that uses AI-generated questions to train NLP systems, avoiding the need for labeled question answering training data.
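The first stage of the method is mechanical: blank out a candidate answer span to form a cloze statement (the paper's contribution is then translating that cloze into a natural-language question). A toy sketch of the cloze step only, with a hypothetical mask token, not the released code:

```python
def make_cloze(sentence, answer, mask="[MASK]"):
    """Blank out the chosen answer span to turn a declarative sentence
    into a (cloze question, answer) training pair."""
    if answer not in sentence:
        raise ValueError("answer span must appear in the sentence")
    return sentence.replace(answer, mask, 1), answer

cloze, ans = make_cloze("Paris is the capital of France.", "Paris")
```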
Neural networks in NLP are vulnerable to adversarially crafted inputs.
We show that they can be trained to become certifiably robust against input perturbations such as typos and synonym substitution in text classification:
https://arxiv.org/abs/1909.01492
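Certified robustness of this kind is typically obtained via interval bound propagation (IBP): push elementwise lower/upper bounds on the perturbed input through the network and verify the worst case. A pure-Python sketch for a single linear layer, illustrative rather than the paper's implementation:

```python
def ibp_linear(lo, hi, W, b):
    """Propagate elementwise input bounds [lo, hi] through y = W @ x + b.
    A positive weight maps lower bound to lower bound; a negative weight
    swaps them, which keeps the output interval worst-case sound."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        out_lo.append(bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row)))
        out_hi.append(bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row)))
    return out_lo, out_hi

# x in [0,1]^2 pushed through y = x0 - x1 gives y in [-1, 1]
bounds = ibp_linear([0, 0], [1, 1], [[1, -1]], [0])
```

If the certified lower bound on the true class's logit margin stays positive over the whole input interval, the classification is provably unchanged by any perturbation inside it.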
Depth Hints are complementary depth suggestions which improve monocular depth estimation algorithms trained from stereo pairs
code:
https://github.com/nianticlabs/depth-hints
paper:
https://arxiv.org/abs/1909.09051
dataset:
https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html
GitHub
GitHub - nianticlabs/depth-hints: [ICCV 2019] Depth Hints are complementary depth suggestions which improve monocular depth estimation…
[ICCV 2019] Depth Hints are complementary depth suggestions which improve monocular depth estimation algorithms trained from stereo pairs - nianticlabs/depth-hints
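The idea, roughly: keep the usual self-supervised photometric loss, and add a supervised log-depth term only at pixels where warping with the stereo hint reprojects better than the network's own prediction. A per-pixel sketch under those assumptions, with flattened lists standing in for image tensors (not the paper's actual loss code):

```python
import math

def depth_hint_loss(loss_pred, loss_hint, depth_pred, depth_hint):
    """Per-pixel: always pay the photometric loss of the prediction; where
    the hint reprojects better, also regress log-depth toward the hint."""
    total = 0.0
    for lp, lh, dp, dh in zip(loss_pred, loss_hint, depth_pred, depth_hint):
        total += lp
        if lh < lp:  # the stereo hint "wins" at this pixel
            total += abs(math.log(dp) - math.log(dh))
    return total
```

The per-pixel gating is what lets noisy stereo hints help without dominating: a bad hint simply never fires.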
Light regression analysis of some Microsoft employees' salary distribution
How basic knowledge of regression and a couple of graphs can make information look much clearer.
Link: https://onezero.medium.com/leak-of-microsoft-salaries-shows-fight-for-higher-compensation-3010c589b41e
#regression #simple #salary #infographic
Medium
Leak of Microsoft Salaries Shows Fight for Higher Compensation
The numbers range from $40,000 to $320,000 and reveal key details about how pay works at big tech companies
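The article's charts boil down to fitting lines through (seniority, pay) scatter plots. A closed-form ordinary-least-squares fit in plain Python, with made-up numbers rather than the leaked data:

```python
def linear_fit(xs, ys):
    """Closed-form ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# hypothetical (years-at-level, salary-in-$k) points, not the leaked dataset
slope, intercept = linear_fit([1, 2, 3, 4], [100, 120, 140, 160])
```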
100,000 FACES GENERATED BY AI FREE FOR ANY USE
https://generated.photos/
https://drive.google.com/drive/folders/1wSy4TVjSvtXeRQ6Zr8W98YbSuZXrZrgY
generated.photos
Generated Photos | Unique, worry-free model photos
AI-generated images have never looked better. Explore and download our diverse, copyright-free headshot images from our production-ready database.
FSGAN: Subject Agnostic Face Swapping and Reenactment
New paper on #DeepFakes creation
YouTube demo:
https://www.youtube.com/watch?v=duo-tHbSdMk
Link:
https://nirkin.com/fsgan/
arXiv:
https://arxiv.org/pdf/1908.05932.pdf
#FaceSwap #DL #Video #CV
YouTube
New Face Swapping AI Creates Amazing DeepFakes!
📝 The paper "FSGAN: Subject Agnostic Face Swapping and Reenactment" is available here:
https://nirkin.com/fsgan/
Torchdata is a PyTorch-oriented library focused on data processing and input pipelines in general
https://github.com/szymonmaszke/torchdata
GitHub
GitHub - szymonmaszke/torchdatasets: PyTorch dataset extended with map, cache etc. (tensorflow.data like)
PyTorch dataset extended with map, cache etc. (tensorflow.data like) - szymonmaszke/torchdatasets
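The tf.data-like chaining (map, cache, …) that the README advertises can be mimicked in a few lines; this toy class only sketches the call pattern and is not torchdata's actual (lazy, torch.utils.data-based) implementation:

```python
class Dataset:
    """Toy chainable dataset sketching the tf.data-like API surface."""

    def __init__(self, items):
        self._items = list(items)

    def map(self, fn):
        # Eager in this toy version; the real library applies fn lazily per sample.
        return Dataset(fn(x) for x in self._items)

    def cache(self):
        # Items are already materialized here, so caching is a no-op;
        # returning self keeps the chaining style.
        return self

    def __getitem__(self, index):
        return self._items[index]

    def __len__(self):
        return len(self._items)

ds = Dataset(range(4)).map(lambda x: x * 2).cache()
```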
2_5203986206391534542.pdf
1.5 MB
Sarbazi, M., Sadeghzadeh, M., & Mir Abedini, S. J. (2019). Improving resource allocation in software-defined networks using clustering. Cluster Computing.
doi:10.1007/s10586-019-02985-3
❇️ @AI_Python_EN
If you just published a paper, let us inform other members.
@ai_python_en
Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis
pdf: https://arxiv.org/pdf/1909.12224.pdf
abs: https://arxiv.org/abs/1909.12224
project page: https://svip-lab.github.io/project/impersonator.html
github: https://github.com/svip-lab/imper
arXiv.org
Liquid Warping GAN: A Unified Framework for Human Motion...
We tackle the human motion imitation, appearance transfer, and novel view synthesis within a unified framework, which means that the model once being trained can be used to handle all these tasks....
#AI for Mammography and Digital Breast Tomosynthesis: Current Concepts and Future Perspectives. Krzysztof et al. explain in the newest Radiology article below.
http://bit.ly/2kULbDz
pubs.rsna.org
Artificial Intelligence for Mammography and Digital Breast Tomosynthesis: Current Concepts and Future Perspectives | Radiology
Although computer-aided diagnosis (CAD) is widely used in mammography, conventional CAD programs that use prompts to indicate potential cancers on the mammograms have not led to an improvement in d...
PyTorch implementations of deep reinforcement learning algorithms and environments
GitHub, by Petros Christodoulou : https://lnkd.in/eRZCQ-d
#pytorch #reinforcementlearning #deeplearning
GitHub
GitHub - p-christ/Deep-Reinforcement-Learning-Algorithms-with-PyTorch: PyTorch implementations of deep reinforcement learning algorithms…
PyTorch implementations of deep reinforcement learning algorithms and environments - GitHub - p-christ/Deep-Reinforcement-Learning-Algorithms-with-PyTorch: PyTorch implementations of deep reinforce...
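Most of the value-based algorithms in such repos bottom out in the same temporal-difference update. A tabular Q-learning step in plain Python (illustrative only; the repo's agents use neural networks, not tables):

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next])
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q

Q = {0: [0.0, 0.0], 1: [1.0, 0.0]}  # toy 2-state, 2-action table
q_update(Q, s=0, a=0, r=1.0, s_next=1)
```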
Google researchers just released #ALBERT, which has beaten previous models across various benchmarks.
Also, did you know that top NLP models now achieve performance that outpaces average human performance on some of these benchmarks?
——————————————————
ALBERT uses parameter reduction techniques to lower memory consumption and increase the training speed of BERT
1. They topped GLUE ( https://lnkd.in/dkWNRVk ) — 92.2%
2. SQuAD (https://lnkd.in/d_Xrba8 ) leaderboards. — 89.4%
3. RACE - they came third with their ensemble model (https://lnkd.in/d2yWbtC ) — 89.4%
——————————————————
Paper at openreview: https://lnkd.in/dzRvWYS
#deeplearning #machinelearning #NLU #NLG #artificialintelligence #ai
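One of ALBERT's two reduction tricks is factorized embedding parameterization: instead of a V×H embedding table, use V×E plus E×H with a small intermediate size E. A quick parameter count, with sizes chosen for illustration rather than taken from a specific ALBERT config:

```python
def embedding_params(vocab, hidden, emb=None):
    """Embedding parameter count: V*H unfactorized (BERT-style), or
    V*E + E*H with a small intermediate size E (ALBERT-style)."""
    return vocab * hidden if emb is None else vocab * emb + emb * hidden

unfactorized = embedding_params(30000, 4096)     # full V x H table
factorized = embedding_params(30000, 4096, 128)  # V x E plus E x H
```

The factorized version is roughly 28x smaller here, which is where much of the memory saving comes from (cross-layer parameter sharing is the other trick).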
Artificial Design: Modeling Artificial Super Intelligence with Extended General Relativity and Universal Darwinism via Geometrization for Universal Design Automation
https://openreview.net/forum?id=SyxQ_TEFwS
Bias and Generalization in Deep Generative Models
Blog by Zhao et al.: https://lnkd.in/eRAhsuS
#DeepLearning #GenerativeModels #MachineLearning
ermongroup.github.io
Bias and Generalization in Deep Generative Models
Research
Self-Paced Learning:
- supervised method from 2010 #NIPS
- idea: start learning with the easiest samples first and only then learn the difficult ones
- distinct from curriculum learning, where samples are pre-classified as easy/hard: here we decide the order on our own
- one measure of easiness: likelihood of the sample in a latent model (outliers will be the hardest)
- a better measure (!): how good are the initial predictions for the sample (samples far away from the decision boundary are the easiest).
- for #classification, samples are only easy in context of other samples!
- the set of easy samples is iteratively enlarged
- results: outperforms CCCP in #DNA Motif Finding, handwritten digit recognition and other problems
- link: https://papers.nips.cc/paper/3923-self-paced-learning-for-latent-variable-models
papers.nips.cc
Self-Paced Learning for Latent Variable Models
Electronic Proceedings of Neural Information Processing Systems
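The iterative enlargement of the easy set can be sketched as follows. In the real algorithm the model is retrained on each admitted set, so per-sample losses change between rounds; this static toy version skips the retraining and just shows the thresholding:

```python
def self_paced_order(losses, k_schedule):
    """Iteratively enlarge the easy set: each round admits every sample
    whose loss is below the threshold 1/k, with k shrinking over rounds."""
    admitted, history = set(), []
    for k in k_schedule:
        admitted |= {i for i, loss in enumerate(losses) if loss < 1.0 / k}
        history.append(sorted(admitted))
    return history

# three samples, easiest to hardest; thresholds grow 0.5 -> 1.0 -> 2.5
rounds = self_paced_order([0.1, 0.6, 2.0], [2.0, 1.0, 0.4])
```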