Predicting Popularity of The New York Times Comments (Part 1)
An article on #NLP research into NYT comments. A nice example of applied #research work.
Link: https://towardsdatascience.com/predicting-popularity-of-the-new-york-times-comments-part-1-d32f26261f6f
Github: https://github.com/sakshi716/nyt-nlp-capstone
The lottery ticket hypothesis: finding sparse, trainable neural networks
Best paper award at #ICLR2019. Main idea: dense, randomly-initialized networks contain sparse subnetworks that, when trained in isolation, reach test accuracy comparable to the original network. This allows compressing the original network to as little as 10% of its original size.
Paper: https://arxiv.org/pdf/1803.03635.pdf
#nn #research
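The core mechanism behind finding these subnetworks is iterative magnitude pruning: after training, the smallest-magnitude weights are masked out, and the survivors are rewound to their original initialization and retrained. A minimal sketch of one pruning round (illustrative pure-Python code, not the authors' implementation; the `magnitude_prune` helper and its parameters are my own naming):

```python
import random

def magnitude_prune(weights, frac=0.9):
    # One round of magnitude pruning: zero out the smallest-magnitude
    # `frac` of weights, keeping a sparse "winning ticket" mask.
    k = int(len(weights) * frac)          # number of weights to remove
    if k == 0:
        return [1] * len(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0 if abs(w) <= threshold else 1 for w in weights]

random.seed(0)
w = [random.gauss(0, 1) for _ in range(100)]
mask = magnitude_prune(w, frac=0.9)
print(sum(mask))  # prints 10: only ~10% of the weights survive
```

In the full procedure this prune-rewind-retrain loop is repeated several times, which is how the paper reaches subnetworks at roughly 10% of the original size without losing test accuracy.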
PyTorch for research
PyTorch Lightning — The PyTorch Keras for ML researchers. More control. Less boilerplate.
Github: https://github.com/williamFalcon/pytorch-lightning
#PyTorch #Research #OpenSource
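To show what "less boilerplate" means in practice, here is a minimal sketch of a Lightning module (assumes `torch` and `pytorch-lightning` are installed; the model architecture and hyperparameters are arbitrary choices for illustration):

```python
import torch
from torch import nn
import pytorch_lightning as pl

class LitClassifier(pl.LightningModule):
    """A tiny MNIST-style classifier: Lightning supplies the training
    loop, device placement, and checkpointing around these hooks."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10)
        )

    def forward(self, x):
        return self.net(x.view(x.size(0), -1))

    def training_step(self, batch, batch_idx):
        # The research logic lives here; no manual .to(device),
        # .backward(), or optimizer.step() calls are needed.
        x, y = batch
        return nn.functional.cross_entropy(self(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

# Usage: trainer = pl.Trainer(max_epochs=1)
#        trainer.fit(LitClassifier(), train_dataloader)
```

The module defines only the pieces that differ between experiments; the `Trainer` handles the rest, which is the "Keras for researchers" pitch.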
Practical ML Conf - The biggest offline ML conference of the year in Moscow.
- https://pmlconf.yandex.ru
- September 7, Moscow
- For speakers: offline
- For participants: offline and online (youtube)
- The conference language is Russian.
The call for papers is open: https://pmlconf.yandex.ru/call_for_papers
#conference #nlp #cv #genAI #recsys #mlops #ecomm #hardware #research #offline #online