Am Neumarkt 😱
286 subscribers
88 photos
3 videos
17 files
513 links
Machine learning and other gibberish
Archives: https://datumorphism.leima.is/amneumarkt/
#dl

https://github.com/Lightning-AI/lightning/releases/tag/2.0.0

With torch 2.0, you can now compile a LightningModule.

import torch
import lightning as L

# LitModel is your own LightningModule subclass, defined elsewhere
model = LitModel()

# This will compile forward and {training,validation,test,predict}_step
compiled_model = torch.compile(model)

trainer = L.Trainer()
trainer.fit(compiled_model)
#misc

This is how generative AI is changing our lives. Thinking about it now, the competitive advantage of the technical skills we took pride in is fading away.

What should we invest in for a better career? Just integrate whatever comes along into our workflow? Or fundamentally change the way we think?
#ml

Pérez J, Barceló P, Marinkovic J. Attention is Turing-Complete. J Mach Learn Res. 2021;22: 1–35. Available: https://jmlr.org/papers/v22/20-302.html
#dl

I am experimenting with torch 2.0 and looking for potential training-time improvements in Lightning. The following article is a very good introduction.

https://lightning.ai/pages/community/tutorial/how-to-speed-up-pytorch-model-training/
#ai

Generate complicated 3D models by jotting down your ideas:

https://opus.ai/demo
#ai

The performance is not too bad. But given that this is about academic topics, this level of hallucination sounds terrible.

https://bair.berkeley.edu/blog/2023/04/03/koala/
#ts

I love the last paragraph, especially this sentence:
> Unfortunately, I can’t continue my debate with Clive Granger. I rather hoped he would come to accept my point of view.

Rob J Hyndman - The difference between prediction intervals and confidence intervals
https://robjhyndman.com/hyndsight/intervals/
#😱
#academia
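Hyndman's point can be illustrated with a quick simulation (a minimal sketch with made-up numbers, not from the article): for i.i.d. normal data, the 95% confidence interval for the mean shrinks like 1/sqrt(n), while the 95% prediction interval for a new observation stays about as wide as the data's own spread.

```python
import math
import random
import statistics

random.seed(0)
data = [random.gauss(10.0, 2.0) for _ in range(200)]  # simulated sample

n = len(data)
mean = statistics.mean(data)
s = statistics.stdev(data)
z = 1.96  # ~95% normal quantile

# Confidence interval: where the *mean* lies; half-width ~ s/sqrt(n)
ci_half = z * s / math.sqrt(n)

# Prediction interval: where a *new observation* lies;
# half-width ~ s*sqrt(1 + 1/n), dominated by the data's own spread
pi_half = z * s * math.sqrt(1 + 1 / n)

print(f"CI: {mean - ci_half:.2f} .. {mean + ci_half:.2f}")
print(f"PI: {mean - pi_half:.2f} .. {mean + pi_half:.2f}")
```

With more data, the CI collapses toward the true mean, but the PI never shrinks below the noise level — which is exactly why confusing the two is so dangerous when forecasting.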

Data Science Weekly mentioned this paper.
https://arxiv.org/abs/2304.06035


Quote from the abstract:

> A growing number of AI academics can no longer find the means and resources to compete at a global scale. This is a somewhat recent phenomenon, but an accelerating one, with private actors investing enormous compute resources into cutting edge AI research.

At first, I thought it was an April Fools' Day paper, but it seems serious.
For example, the author mentions the strategy "Analysis Instead of Synthesis". This has already happened in many fields: global-scale, money-burning experiments in physics have left many teams no choice but to take other teams' data and analyze it.


This is actually quite crazy. Given how AI/ML is developing, it is almost a paradigm shift in research.
I read a discussion on Reddit on a similar topic. Some people are concerned that medical research will also shift to the private sector because of AI, leaving many people no choice but to join the big medical corporations.

On the other hand, computing-resource requirements also make it hard for smaller companies to compete in some fields. We need such a guide for business too.
#ai

AI Frontiers: AI for health and the future of research with Peter Lee

https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5ibHVicnJ5LmNvbS9mZWVkcy9taWNyb3NvZnRyZXNlYXJjaC54bWw/episode/aHR0cHM6Ly9ibHVicnJ5LmNvbS9taWNyb3NvZnRyZXNlYXJjaC85NTE3NTgwMC9haS1mcm9udGllcnMtYWktZm9yLWhlYWx0aC1hbmQtdGhlLWZ1dHVyZS1vZi1yZXNlYXJjaC13aXRoLXBldGVyLWxlZS8?ep=14

----

A very cool discussion on the topic of large language models.

They mentioned the early-stage tests of Davinci from OpenAI. The model was able to reason about AP Biology questions, and much of its reasoning surprised them. Ashley then asked the person from OpenAI why Davinci reasons like that, and the person replied that they don't know.

Not everyone expected that kind of reasoning from an LLM. In hindsight, "it is just a language model" raises a very good question. Nowadays, with GPT models, it seems this is no longer a question because it is becoming a fact. What is in the training texts, and what is language? Karpathy even made a joke about this:
> The hottest new programming language is English
https://twitter.com/karpathy/status/1617979122625712128?lang=en
#coding

I had some discussions with several people about writing good code during machine-learning experimentation.

Whenever it comes to writing formal code, opinions diverge. So, should we write good code that is easy to read, with typing and tests, even in experiments?

The spirit of experimentation is to be fast and reliable. So, naturally, the question comes down to which coding style lets us develop and run experiments fast.

My experience with running experiments is that we will never run the code just once. Instead, we always come back to it and run it with different configurations or parameters. In this circumstance, how good shall my code be?

For typing and tests, I type most of my args but only write tests needed to develop and debug a function or class.
- Typing is important because people spend time figuring out what to pass as an argument to a function. With typing, it is much faster.
- Here is an example for tests: if I need to know the shape of a tensor deep inside a method of a class, I spend a few seconds writing a simple test that lets me put breakpoints in the method and investigate.
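The two points above can be sketched like this; `batch_mean`, its shapes, and the test are hypothetical examples of mine, not from any library:

```python
from typing import List

import numpy as np


def batch_mean(embeddings: np.ndarray, weights: List[float]) -> np.ndarray:
    """Weighted mean over the batch axis: (batch, dim) -> (dim,)."""
    w = np.asarray(weights)[:, None]  # shape (batch, 1)
    return (embeddings * w).sum(axis=0) / w.sum()


def test_batch_mean_shape():
    # A throwaway test: small enough to write in seconds, and it gives
    # me a place to set a breakpoint inside batch_mean to inspect shapes.
    x = np.ones((4, 8))
    out = batch_mean(x, [1.0, 1.0, 1.0, 1.0])
    assert out.shape == (8,)


test_batch_mean_shape()
```

The type hints tell the next reader (often future me) what to pass in, and the tiny test is a reusable entry point for the debugger rather than a full test suite.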

But the above is a bit trivial. What about the design of the functions and classes? I suggest taking your time with those that are repeated in every experiment. We will hit a ceiling in development speed very quickly if we always stick with the first, most naive design. In practice, I would say: design it twice, write it once.
One such example is data preprocessing. When dealing with the same data and problems, the data transformations are usually quite similar across experiments but differ a bit in the details. Finding the patterns and writing slightly generic functions helps. There is always the risk of over-engineering, so I prefer to improve things little by little: I might generalize a function a bit in one experiment. Also, don't hesitate to throw away your code and rewrite it. Rewriting takes little time and usually brings improvements.
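As a sketch of what "slightly generic" can mean here (all names are hypothetical): small, composable transform functions that each experiment wires together with slightly different details.

```python
from typing import Callable, List

Transform = Callable[[List[float]], List[float]]


def compose(*steps: Transform) -> Transform:
    """Chain small transforms into one preprocessing function."""
    def pipeline(xs: List[float]) -> List[float]:
        for step in steps:
            xs = step(xs)
        return xs
    return pipeline


def drop_missing(xs: List[float]) -> List[float]:
    return [x for x in xs if x == x]  # filters out NaN


def clip(lo: float, hi: float) -> Transform:
    return lambda xs: [min(max(x, lo), hi) for x in xs]


def scale(factor: float) -> Transform:
    return lambda xs: [x * factor for x in xs]


# Each experiment rebuilds its own pipeline from the same small pieces,
# varying only the details (clip bounds, scale factor, step order).
preprocess = compose(drop_missing, clip(0.0, 10.0), scale(0.1))
print(preprocess([float("nan"), -5.0, 3.0, 42.0]))
```

The next experiment can swap in different bounds or reorder steps without touching the pieces themselves; when a piece no longer fits, rewriting it is cheap.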

That's my two cents on code quality for developing and running machine-learning experiments.