Forwarded from Parallel Experiments (Linghao Zhang)
Great thread on how a communication failure contributed to SVB’s collapse.
https://twitter.com/lulumeservey/status/1634232322693144576
#ml
https://mlcontests.com/state-of-competitive-machine-learning-2022/
Quote from the report:
Successful competitors have mostly converged on a common set of tools — Python, PyData, PyTorch, and gradient-boosted decision trees.
Deep learning still has not replaced gradient-boosted decision trees when it comes to tabular data, though it does often seem to add value when ensembled with boosting methods.
Transformers continue to dominate in NLP, and start to compete with convolutional neural nets in computer vision.
Competitions cover a broad range of research areas including computer vision, NLP, tabular data, robotics, time-series analysis, and many others.
Large ensembles remain common among winners, though single-model solutions do win too.
There are several active machine learning competition platforms, as well as dozens of purpose-built websites for individual competitions.
Competitive machine learning continues to grow in popularity, including in academia.
Around 50% of winners are solo winners; 50% of winners are first-time winners; 30% have won more than once before.
Some competitors are able to invest significantly into hardware used to train their solutions, though others who use free hardware like Google Colab are also still able to win competitions.
ML Contests
The State of Competitive Machine Learning | ML Contests
We summarise the state of the competitive landscape and analyse the 200+ competitions that took place in 2022. Plus a deep dive analysis of 67 winning solutions to figure out the best strategies to win at competitive ML.
#dl
https://github.com/Lightning-AI/lightning/releases/tag/2.0.0
You can compile (torch 2.0) LightningModule now.
import torch
import lightning as L

# LitModel is your own LightningModule subclass
model = LitModel()

# This will compile forward and {training,validation,test,predict}_step
compiled_model = torch.compile(model)

trainer = L.Trainer()
trainer.fit(compiled_model)
GitHub
Release Lightning 2.0: Fast, Flexible, Stable · Lightning-AI/lightning
Lightning AI is excited to announce the release of Lightning 2.0 ⚡
Highlights
Backward Incompatible Changes
PyTorch
Fabric
Full Changelog
PyTorch
Fabric
App
Contributors
Over the last coupl...
#misc
This is how generative AI is changing our lives. Thinking about it now, the competitive advantages built on our hard-earned technical skills are fading away.
What should we invest in for a better career? Just integrate whatever comes along into our workflow? Or fundamentally change the way we think?
#ml
Pérez J, Barceló P, Marinkovic J. Attention is Turing-Complete. J Mach Learn Res. 2021;22: 1–35. Available: https://jmlr.org/papers/v22/20-302.html
#dl
I am experimenting with torch 2.0 and searching for potential training-time improvements in Lightning. The following article provides a very good introduction.
https://lightning.ai/pages/community/tutorial/how-to-speed-up-pytorch-model-training/
Lightning AI
How to Speed Up PyTorch Model Training
Learn how to improve the training performance of your PyTorch model without compromising its accuracy.
#ai
A lot of big names signed it. (Not sure how they verify the signatories, though.)
Personally, I'm not buying it.
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Future of Life Institute
Pause Giant AI Experiments: An Open Letter - Future of Life Institute
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
#ai
The performance is not too bad. But given that this is aimed at academic topics, this level of hallucination sounds terrible.
https://bair.berkeley.edu/blog/2023/04/03/koala/
The Berkeley Artificial Intelligence Research Blog
Koala: A Dialogue Model for Academic Research
The BAIR Blog
#data
Quite useful.
I use pyarrow a lot and also a bit of polars. Mostly because pandas is slow. With the new 2.0 release, all three libraries are seamlessly connected to each other.
https://datapythonista.me/blog/pandas-20-and-the-arrow-revolution-part-i
datapythonista blog
pandas 2.0 and the Arrow revolution (part I)
Introduction At the time of writing this post, we are in the process of releasing pandas 2.0. The project has a large number of users,...
#ts
I love the last paragraph, especially this sentence:
> Unfortunately, I can’t continue my debate with Clive Granger. I rather hoped he would come to accept my point of view.
Rob J Hyndman - The difference between prediction intervals and confidence intervals
https://robjhyndman.com/hyndsight/intervals/
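The distinction can be sketched numerically (a rough large-sample z approximation assuming roughly normal data; the function name and sample values are mine, not from the post):

```python
import math
import statistics

def mean_ci_and_pi(sample, z=1.96):
    """Approximate 95% confidence interval for the mean and
    95% prediction interval for a single future observation."""
    n = len(sample)
    m = statistics.mean(sample)
    s = statistics.stdev(sample)
    ci_half = z * s / math.sqrt(n)          # uncertainty about the mean
    pi_half = z * s * math.sqrt(1 + 1 / n)  # uncertainty about a new point
    return (m - ci_half, m + ci_half), (m - pi_half, m + pi_half)

ci, pi = mean_ci_and_pi([10.1, 9.8, 10.4, 10.0, 9.9, 10.2])
```

The prediction interval is always wider than the confidence interval: it must cover the noise in a single new observation, not just the uncertainty in the estimated mean.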
#code
To me, high cognitive load reduces my code quality. In theory, there are many tricks to reduce cognitive load, e.g., better modularity. In practice, they are not always carried out. Will ChatGPT help? Let's see.
https://www.caitlinhudon.com/posts/programming-beyond-cognitive-limitations-with-ai
Haystacks by Caitlin Hudon
Programming Beyond Cognitive Limitations with AI — Haystacks by Caitlin Hudon
Our natural processing power is limited, and leveraging AI for assistance can help us to use it more efficiently, especially when it comes to reading and understanding code. Grokking new code requires cognitive load — and can sometimes trigger cognitive…