#ml
Mitchell M, Wu S, Zaldivar A, Barnes P, Vasserman L, Hutchinson B, et al. Model cards for model reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency. New York, NY, USA: ACM; 2019. doi:10.1145/3287560.3287596
https://arxiv.org/abs/1810.03993
#fun
😂😂😂
[P] No, we don't have to choose batch sizes as powers of 2: MachineLearning
https://www.reddit.com/r/MachineLearning/comments/vs1wox/p_no_we_dont_have_to_choose_batch_sizes_as_powers/
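The linked thread is about whether batch sizes really need to be powers of 2 for speed. A minimal timing sketch of how one could check this on their own setup (the toy MLP, the batch sizes, and the step counts below are arbitrary choices of mine, not taken from the thread):

```python
# Rough per-step timing for a few batch sizes around a power of 2.
# Toy model and sizes; only meant to show the measurement pattern.
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10)).to(device)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.01)

for batch_size in (120, 127, 128, 129, 136):
    x = torch.randn(batch_size, 1024, device=device)
    y = torch.randint(0, 10, (batch_size,), device=device)
    for _ in range(3):  # warm-up steps
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    if device == "cuda":
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(50):  # timed steps
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    if device == "cuda":
        torch.cuda.synchronize()
    dt = (time.perf_counter() - t0) / 50
    print(f"batch {batch_size}: {dt * 1e3:.2f} ms/step")
```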
#career
AI4Science to empower the fifth paradigm of scientific discovery (Microsoft Research blog)
https://www.microsoft.com/en-us/research/blog/ai4science-to-empower-the-fifth-paradigm-of-scientific-discovery/
#ml
I was playing with dalle-mini ( https://github.com/borisdayma/dalle-mini ).
So... in the eyes of Dalle-mini,
1. science == chemistry (? I guess),
2. scientists are men.
Tried several times, same conclusions.
It is so hard to fight against bias in ML models.
---
Update: OpenAI is fixing this.
https://openai.com/blog/reducing-bias-and-improving-safety-in-dall-e-2/
#ml
The recommended readings serve as a good curriculum for transformers.
https://web.stanford.edu/class/cs25/index.html#course
#ml
https://arxiv.org/abs/2205.02302
Kreuzberger D, Kühl N, Hirschl S. Machine Learning Operations (MLOps): Overview, definition, and architecture. arXiv [cs.LG]. 2022 [cited 17 Jul 2022]. doi:10.48550/ARXIV.2205.02302
#python
Guidelines for research coding. They don't set the highest bar, but they are easy to follow.
https://goodresearch.dev/
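One habit such guidelines usually push (my own toy illustration, not code from the handbook): keep analysis logic in small, importable, tested functions rather than in one long notebook or script.

```python
# analysis.py - a tiny, importable, testable unit of research code.
import numpy as np

def zscore(x) -> np.ndarray:
    """Standardize a 1-D array to zero mean and unit variance."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

# test_analysis.py - run with pytest
def test_zscore():
    out = zscore([1.0, 2.0, 3.0])
    assert abs(out.mean()) < 1e-12
    assert abs(out.std() - 1.0) < 1e-12
```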
#career
> so the job of data scientist will only continue to grow in its importance in the business landscape.
>
> However, it will also continue to change. We expect to see continued differentiation of responsibilities and roles that all once fell under the data scientist category.
https://hbr.org/2022/07/is-data-scientist-still-the-sexiest-job-of-the-21st-century
#ml
Fotios Petropoulos initiated the forecasting encyclopaedia project. They published this paper recently.
Petropoulos, Fotios, Daniele Apiletti, Vassilios Assimakopoulos, Mohamed Zied Babai, Devon K. Barrow, Souhaib Ben Taieb, Christoph Bergmeir, et al. 2022. “Forecasting: Theory and Practice.” International Journal of Forecasting 38 (3): 705–871.
https://www.sciencedirect.com/science/article/pii/S0169207021001758
Also available here: https://forecasting-encyclopedia.com/
The paper covers many recent advances in forecasting, including deep learning models. There are some important topics missing but I’m sure they will cover them in future releases.
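Not from the paper itself, just a sketch of one classical baseline the encyclopedia covers: Holt-Winters exponential smoothing via statsmodels, fit on a made-up monthly series (all data and settings below are mine).

```python
# Holt-Winters (additive trend + seasonality) on a synthetic monthly series.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
t = np.arange(120)  # ten years of monthly data
y = 50 + 0.3 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, size=t.size)

model = ExponentialSmoothing(y, trend="add", seasonal="add", seasonal_periods=12)
fit = model.fit()
print(fit.forecast(12))  # forecast the next 12 months
```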
#fun
> participants who spent more than six hours working on a tedious and mentally taxing assignment had higher levels of glutamate — an important signalling molecule in the brain. Too much glutamate can disrupt brain function, and a rest period could allow the brain to restore proper regulation of the molecule
https://www.nature.com/articles/d41586-022-02161-5
#fun
I became a beta tester of DALLE. Played with it for a while and it is quite fun. See the comments for some examples.
Comment if you would like to test some prompts.
#ml
https://ai.googleblog.com/2022/08/optformer-towards-universal.html?m=1
I find this work counterintuitive. They took text descriptions of machine-learning optimization runs and trained a transformer to "guesstimate" the hyperparameters of a model.
I understand that humans develop some "feel" for hyperparameters after working with a dataset and model for a while, but it is usually hard to extrapolate that knowledge to completely new data and models.
I guess our brains do some statistics over our past experiments, and we call that intuition. My "intuition" is that there is little generalizable knowledge in this problem. 🙈 It would have been great if they had investigated saliency maps.
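A toy illustration of the general idea in my own made-up format (OptFormer's actual serialization and training setup differ): write the tuning history as text and let a sequence model continue it with the next trial.

```python
# Serialize a hyperparameter-tuning history into a text prompt.
# The field names and format here are invented for illustration only.
trials = [
    {"lr": 1e-2, "batch_size": 128, "val_acc": 0.81},
    {"lr": 1e-3, "batch_size": 64, "val_acc": 0.86},
    {"lr": 3e-4, "batch_size": 64, "val_acc": 0.88},
]

def serialize(trials):
    lines = ["task: image classification", "metric: val_acc (maximize)"]
    for i, t in enumerate(trials):
        lines.append(
            f"trial {i}: lr={t['lr']:g} batch_size={t['batch_size']} -> val_acc={t['val_acc']:.2f}"
        )
    lines.append(f"trial {len(trials)}: lr=")  # a language model would continue this line
    return "\n".join(lines)

print(serialize(trials))
```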
#visualization
Hmm, not so many contributions from wild animals.
Source: https://www.weforum.org/agenda/2021/08/total-biomass-weight-species-earth
Data from this paper: https://www.pnas.org/doi/10.1073/pnas.1711842115#T1
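A quick matplotlib sketch of the comparison. The numbers are approximate values as I recall them from the cited PNAS paper (in gigatonnes of carbon); double-check them against the paper before reusing.

```python
# Approximate biomass by group, Gt C (from memory of the cited paper; verify).
import matplotlib.pyplot as plt

biomass_gt_c = {
    "Humans": 0.06,
    "Livestock": 0.1,
    "Wild mammals": 0.007,
    "Wild birds": 0.002,
}

fig, ax = plt.subplots(figsize=(5, 3))
ax.bar(list(biomass_gt_c.keys()), list(biomass_gt_c.values()))
ax.set_ylabel("Biomass (Gt C, approx.)")
ax.set_title("Not so many wild animals")
plt.tight_layout()
plt.show()
```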
#fun
Some results from the stable diffusion model. See the comments for some examples.
https://huggingface.co/CompVis/stable-diffusion
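For anyone who wants to try it, a minimal sketch using the diffusers library; the model id "CompVis/stable-diffusion-v1-4", the float16 setting, and the CUDA device are my assumptions, not part of the post.

```python
# Generate one image with Stable Diffusion via the diffusers pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # assumed model id; check the model card
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("an astronaut riding a horse, oil painting").images[0]
image.save("astronaut.png")
```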