#tool
I read about this on Reddit but never really looked into the details. It is actually amazing.
Just watch the video in the README.
https://github.com/Significant-Gravitas/Auto-GPT
#academia
Data Science Weekly mentioned this paper.
https://arxiv.org/abs/2304.06035
Quote from the abstract:
> A growing number of AI academics can no longer find the means and resources to compete at a global scale. This is a somewhat recent phenomenon, but an accelerating one, with private actors investing enormous compute resources into cutting edge AI research.
At first, I thought it was an April Fools' Day paper, but it seems serious.
For example, the author mentions the strategy "Analysis Instead of Synthesis". This has already happened in many fields: global-scale, money-burning experiments in physics left many teams no choice but to take other teams' data and analyze it.
This is actually quite crazy. Given how AI/ML is developing, it's almost a paradigm shift in research.
I read a discussion on Reddit on a similar topic. Some people are concerned that medical research will also shift to the private sector because of AI, leaving many researchers no choice but to join big medical corporations.
On the other hand, computing resource requirements have also made it hard for smaller companies to compete in some fields. We need such a guide for business too.
#ai
AI Frontiers: AI for health and the future of research with Peter Lee
https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5ibHVicnJ5LmNvbS9mZWVkcy9taWNyb3NvZnRyZXNlYXJjaC54bWw/episode/aHR0cHM6Ly9ibHVicnJ5LmNvbS9taWNyb3NvZnRyZXNlYXJjaC85NTE3NTgwMC9haS1mcm9udGllcnMtYWktZm9yLWhlYWx0aC1hbmQtdGhlLWZ1dHVyZS1vZi1yZXNlYXJjaC13aXRoLXBldGVyLWxlZS8?ep=14
----
A very cool discussion on the topic of large language models.
They mentioned the early-stage tests of Davinci from OpenAI. The model was able to reason through AP Biology questions, and much of its reasoning surprised them. When Ashley asked the person from OpenAI why Davinci reasons like that, the reply was that they don't know.
Not everyone expected that kind of reasoning in an LLM. In hindsight, "Is it just a language model?" is a very good question. Nowadays, with GPT models, the question is no longer a question because the answer is becoming a fact. What is in the training texts, and what is language? Karpathy even made a joke about this:
> The hottest new programming language is English
https://twitter.com/karpathy/status/1617979122625712128?lang=en
#misc
I was working and didn't have time to watch the live stream when they launched Starship. After some Twitter browsing, I have to say, this thing is beautiful.
https://twitter.com/nextspaceflight/status/1649052544755470338
#coding
I had some discussions with several people about writing good code during machine learning experimentation.
Whenever it comes to writing formal code, opinions diverge. So, should we write good code that is easy to read, with typing and tests, even in experiments?
The spirit of experimentation is to be fast and reliable. So naturally, the question comes down to what kind of coding style allows us to develop and run experiments fast.
My experience with running experiments is that we never run the code just once. Instead, we always come back to it and run it with different configurations or parameters. In this circumstance, how good should my code be?
For typing and tests, I type most of my arguments but only write the tests needed to develop and debug a function or class.
- Typing is important because people spend time figuring out what to pass as an argument to a function. With type hints, it is much faster.
- Here is an example for tests: if I need to know the shape of a tensor deep inside a method of a class, I spend a few seconds writing a simple test that lets me put breakpoints in the method and investigate from the inside (see the sketch after this list).
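A minimal sketch of what I mean; `scale_features`, the shapes, and the test are hypothetical, purely for illustration:

```python
import torch


def scale_features(batch: torch.Tensor, factor: float = 1.0) -> torch.Tensor:
    # Type hints tell the caller immediately what to pass in.
    return batch * factor


def test_scale_features_shape() -> None:
    # A throwaway debugging test: run it under a debugger and set a
    # breakpoint inside scale_features to inspect tensor shapes live.
    batch = torch.randn(8, 16)  # hypothetical (batch, features) shape
    out = scale_features(batch, factor=2.0)
    assert out.shape == (8, 16)
```

Running just this one test under a debugger (for example, `pytest -k test_scale_features_shape`) is usually much faster than re-running the whole experiment to reach that breakpoint.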
But the above is a bit trivial. What about the design of functions and classes? I suggest taking your time with those that are repeated in every experiment. We will hit a ceiling in development speed very quickly if we always use the first, most naive design for them. In practice, I would say: design it twice, write it once.
One such example is data preprocessing. When dealing with the same data and problems, the data transformations are usually quite similar in each experiment, but a bit different in the details. Finding the patterns and writing slightly generic functions helps. There is always the risk of over-engineering, so I prefer to improve things little by little: I might generalize a function a bit in one experiment. Also, don't hesitate to throw away your code and rewrite it. Rewriting takes little time, and it usually brings improvements; the sketch below shows one way to keep such transformations composable.
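As a sketch of what "slightly generic" could look like, here is a small composition helper; the step functions and their details are hypothetical:

```python
from typing import Callable, Sequence

import pandas as pd

Step = Callable[[pd.DataFrame], pd.DataFrame]


def build_preprocessor(steps: Sequence[Step]) -> Step:
    # Compose per-experiment transformation steps into one function,
    # so each experiment only swaps the steps, not the plumbing.
    def preprocess(df: pd.DataFrame) -> pd.DataFrame:
        for step in steps:
            df = step(df)
        return df

    return preprocess


def drop_missing(df: pd.DataFrame) -> pd.DataFrame:
    return df.dropna()


def normalize(df: pd.DataFrame) -> pd.DataFrame:
    return (df - df.mean()) / df.std()


# Each experiment reuses the plumbing but brings its own steps.
preprocess = build_preprocessor([drop_missing, normalize])
```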
That's my two cents on code quality for developing and running machine learning experiments.
#misc
‘The Godfather of AI’ Quits Google and Warns of Danger Ahead - The New York Times
https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html
#ai
The new Bing can deal with this kind of planted misinformation: the references it provides can be used to verify the answer. This makes it more reliable than ChatGPT.
#ml
Yeh, Catherine, Yida Chen, Aoyu Wu, Cynthia Chen, Fernanda Viégas, and Martin Wattenberg. 2023. “AttentionViz: A Global View of Transformer Attention.” ArXiv [Cs.HC]. arXiv. http://arxiv.org/abs/2305.03210.
#timeseries
Finding a suitable metric to evaluate forecasting models is often the key to a forecasting project, right? We use metrics when developing models, and we also use them to monitor models.
There are a bunch of metrics people choose from or adapt. To make choosing and adapting metrics faster, I created a page on the properties of different metrics for time series forecasting problems. For reproducibility, I also included all the code used to write the page.
https://dl.leima.is/time-series/timeseries-metrics.forecasting/
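For a flavor of the trade-offs involved, here are two common metrics in plain NumPy; these follow the textbook formulas and are not necessarily the exact conventions used on the page:

```python
import numpy as np


def mape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Mean absolute percentage error; blows up when y_true contains zeros.
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)))


def smape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Symmetric MAPE; bounded, but still treats over- and
    # under-forecasts asymmetrically.
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(
        np.mean(2 * np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred)))
    )
```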
#visualization
Demographic projection for Germany
https://service.destatis.de/bevoelkerungspyramide/index.html#!y=2023&v=4&l=en&g
#visualization
This is a great post. I certainly don't agree with the author that this is "the greatest statistical graphics ever created", but I can't prove my case either.
Also, just like in coding, "design it twice (or even more times)" is a great way to produce the best charts. That is, we should keep making different versions and compare them with each other.
https://nightingaledvs.com/defying-chart-design-rules-for-clearer-data-insights/
#forecasting
Created some sections on forecasting with trees. This first draft covers the first steps in applying trees to forecasting problems, as well as some useful theory behind tree-based models.
https://dl.leima.is/trees/tree.basics/
https://dl.leima.is/notebooks/tree_darts_random_forest/
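For a flavor of those first steps, here is a minimal sketch using the darts library (which the second link above hints at); the dataset, lag choice, and horizon are arbitrary assumptions, not the exact setup in the notebook:

```python
from darts.datasets import AirPassengersDataset
from darts.models import RandomForest

series = AirPassengersDataset().load()
train, val = series[:-12], series[-12:]

# Lagged values of the series become the input features of the trees.
model = RandomForest(lags=12)
model.fit(train)
forecast = model.predict(len(val))
print(forecast.values()[:3])
```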
#ml
Yes, Transformers are Effective for Time Series Forecasting (+ Autoformer)
https://huggingface.co/blog/autoformer