eternal singularity
it's so over for us bros
admin: @eternalclassicadmin
Eliezer Yudkowsky's open letter published in Time
Google trained Bard on ChatGPT chats from ShareGPT
You're hiding a GPU cluster under the floorboards, aren't you?
soon
2446 AI research papers were published in the last 10 days
๐Ÿ‘43๐Ÿคฏ10๐Ÿ‘Ž2๐Ÿคก2๐Ÿ‘1
Italy banned ChatGPT
AI tool that extracts the typed text from an audio recording of keystrokes
The optimization possibilities are endless with C and C++: the 30B LLaMA model now uses approximately 6.0 GB of RAM instead of the usual 32.0 GB.
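For context: as I understand it, the saving comes from llama.cpp switching to mmap-ing the weights file, so the OS pages in only the tensors that actually get touched instead of copying the whole file into RAM. A minimal C sketch of the idea (the file name is made up for illustration; this is not llama.cpp's actual loader, which also parses a header and tensor index):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    /* Hypothetical weights file name, for illustration only. */
    int fd = open("llama-30b.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    /* Map the whole file read-only. This reserves address space only;
       physical RAM is committed page by page, on first access. */
    const float *w = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (w == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touch only the first 4 KiB worth of floats: resident memory
       grows by roughly one page, not by the size of the file. */
    double sum = 0.0;
    for (size_t i = 0; i < 1024; i++)
        sum += w[i];
    printf("first-page checksum: %f\n", sum);

    munmap((void *)w, st.st_size);
    close(fd);
    return 0;
}

Resident memory then reflects the working set rather than the file size, which is presumably why the reported number dropped from ~32 GB to ~6 GB.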
It looks like 30B and 65B models fine-tuned on GPT-4 outputs are coming soon
comments are up
keep it in English and use common sense, otherwise I'll have to close them
"Making Deep Learning go Brrrr From First Principles" or how to improve the performance of deep learning models by understanding the underlying principles of deep learning
๐Ÿ‘11
๐Ÿ‘€7