Offshore
Dreaming Tulpa
I'm gonna make him a blessing he can't refuse. #aiia #aiart #midjourney https://t.co/aRpLwxipfF
Dave Craige
RT @HustleGPT: https://t.co/KQS0eVrJ5r
Working on the #HustleGPT website!
Making some progress. We will put the "Focus Document" right on the homepage so it is super clear.
/ @HustleGPT https://t.co/OJ5WZ4D5vg - Dave Craige
Robin Hanson
"What I do know is that the biggest danger right now - manifesting before our very eyes - is the hysteria and unwise gestures β¦ calling for a futile, counterproductive moratorium β an unenforceable 'training pause.'β https://t.co/pFMdihYeCV
Just posted, I react to a hysteria-drenched 'petition' by over a hundred of the most highly-reputed folks in AI and related fields, demanding a 6-month 'pause' in training new 'GPT' systems. There are far better approaches that could actually work.
https://t.co/VnaEOAHrSq - David Brin
Carlos E. Perez
For those fine-tuning smaller LLMs, what do you do to guide the training in an orthogonal direction to what GPT-4 can do?
Dave Craige
RT @HustleGPT: ☕️ soon https://t.co/pKiDlpj3OH
*designing the new /support-us page of the #HustleGPT site where you can buy a $3 coffee to help us donate to the mod team ☕️ https://t.co/4GCl5qPiKX - Dave Craige
Dave Craige
RT @HernanArber: @davecraige @HustleGPT Loving it so far. Thanks for the invite.
Robin Hanson
Consider two ways to hold agents accountable to you: PROSPECTIVE: many agents submit proposals re promised future behaviors, & you pick one; RETROSPECTIVE: incumbent agent behaves, you see results, then retain or replace w/ new agent. Which way do you prefer?