Forwarded from Empty Set of Ideas (Proof:)
Nonlinear_change_processes_and_the_emergence_of_suicidal_behavior.pdf
1004.8 KB
Observation of large scale precursor correlations between cosmic rays and earthquakes
Abstract:
The search for correlations between secondary cosmic ray detection rates and seismic effects has long been a subject of investigation motivated by the hope of identifying a new precursor type that could feed a global early warning system against earthquakes. Here we show for the first time that the average variation of the cosmic ray detection rates correlates with the global seismic activity to be observed with a time lag of approximately two weeks, and that the significance of the effect varies with a periodicity resembling the undecenal solar cycle, with a shift in phase of around three years, exceeding 6 sigma at local maxima. The precursor characteristics of the observed correlations point to a pioneer perspective of an early warning system against earthquakes.
https://arxiv.org/abs/2204.12310
https://arxiv.org/ftp/arxiv/papers/2204/2204.12310.pdf
👍3
Forwarded from Цуберок 🇺🇦 #УкрТґ
Photonics experiment resolves quantum paradox
https://phys.org/news/2023-07-photonics-quantum-paradox.html
phys.org
Photonics experiment resolves quantum paradox
It seems quantum mechanics and thermodynamics cannot be true simultaneously. In a new publication, UT researchers use photons in an optical chip to demonstrate how both theories can be true at the same time.
🔥1
Forwarded from Just links
Self-Consuming Generative Models Go MAD https://arxiv.org/abs/2307.01850
👎1
Forwarded from cyberchaos & mathюки
CERN Ukrainian REMOTE Student Program:
This offer was just dropped in Slack; it was posted on the site today. Try to apply as soon as possible, since they will start selecting people in 4 weeks. The pay is good, the tasks are interesting, everything is at the link. It's a very good offer (and getting in is more realistic than you might think).
https://jobs.smartrecruiters.com/CERN/743999917982762-ukrainian-remote-student-program
CERN
CERN is hiring for the Ukrainian Remote Student Program in Geneva, Switzerland
You will contribute to an advanced technical project in an experimental physics or engineering team for a period of 8 to 13 weeks;A report on your work and project will be expected at the end of t...
🔥1
https://twitter.com/LukeGessler/status/1679211291292889100
Gzip and KNN Outperform Transformers on Text Classification
X (formerly Twitter)
Luke Gessler (@LukeGessler) on X
this paper's nuts. for sentence classification on out-of-domain datasets, all neural (Transformer or not) approaches lose to good old kNN on representations generated by.... gzip https://t.co/6eZiXlJxOX
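The trick in the linked paper boils down to the Normalized Compression Distance computed with gzip, plus a plain k-nearest-neighbour vote. A minimal sketch of that idea (the toy training texts and labels below are made up for illustration, not taken from the paper):

```python
import gzip


def clen(s: str) -> int:
    """Length in bytes of the gzip-compressed UTF-8 encoding of s."""
    return len(gzip.compress(s.encode("utf-8")))


def ncd(a: str, b: str) -> float:
    """Normalized Compression Distance: small when compressing the two
    texts together saves a lot, i.e. when they share structure."""
    ca, cb, cab = clen(a), clen(b), clen(a + " " + b)
    return (cab - min(ca, cb)) / max(ca, cb)


def knn_classify(query: str, train: list[tuple[str, str]], k: int = 1) -> str:
    """Label the query by majority vote among its k NCD-nearest training texts."""
    ranked = sorted(train, key=lambda pair: ncd(query, pair[0]))
    top = [label for _, label in ranked[:k]]
    return max(set(top), key=top.count)


# Toy corpus (placeholder data, just to exercise the pipeline).
train = [
    ("the goalkeeper saved a penalty in the final", "sport"),
    ("the striker scored twice before half time", "sport"),
    ("the central bank raised interest rates again", "finance"),
    ("markets fell after the inflation report", "finance"),
]
print(knn_classify("the defender fouled the striker near the goal", train, k=3))
```

No training, no parameters: the compressor itself is the similarity model, which is why it transfers to out-of-domain text better than fitted neural encoders.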
https://www.youtube.com/watch?v=Qg3XOfioapI
https://www.youtube.com/watch?v=MqZgoNRERY8
old, but somewhat related
YouTube
Code Golf & the Bitshift Variations - Computerphile
Thanks to Audible for supporting our channel. Get a free 30 day trial at http://www.audible.com/Computerphile
A short jumble of letters & symbols that plays a long, musical tune? This is code Golf and Rob Miles' musical composition: "The Bitshift Variations…
echo "g(i,x,t,o){return((3&x&(i*((3&i>>16?\"BY}6YB6%\":\"Qj}6jQ6%\")[t%8]+51)>>o))<<4);};main(i,n,s){for(i=0;;i++)putchar(g(i,1,n=i>>14,12)+g(i,s=i>>17,n^i>>13,10)+g(i,s/3,n+((i>>11)%3),10)+g(i,s/5,8+n-((i>>10)%3),9));}"|gcc -xc -&&./a.out|aplay
👍1
Forwarded from Artificial Intelligence
FLASK: Fine-grained Language Model Evaluation Based on Alignment Skill Sets
🖥 Github: https://github.com/kaistai/flask
⏩ Paper: https://arxiv.org/pdf/2307.10928v1.pdf
💨 Dataset: https://paperswithcode.com/dataset/gsm8k
@ArtificialIntelligencedl
🔥1
Forwarded from Empty Set of Ideas (Arsenii)
The paradox of the efficient code and the neural Tower of Babel
«A pervasive metaphor in neuroscience is the idea that neurons “encode” stuff: some neurons encode pain; others encode the location of a sound; maybe a population of neurons encode some other property of objects. What does this mean? In essence, that there is a correspondence between some objective property and neural activity: when I feel pain, this neuron spikes; or, the image I see is “represented” in the firing of visual cortical neurons. The mapping between the objective properties and neural activity is the “code”. How insightful is this metaphor?
An encoded message is understandable to the extent that the reader knows the code. But the problem with applying this metaphor to the brain is only the encoded message is communicated, not the code, and not the original message. Mathematically, original message = encoded message + code, but only one term is communicated. This could still work if there were a universal code that we could assume all neurons can read, the “language of neurons”, or if somehow some information about the code could be gathered from the encoded messages themselves.
Unfortunately, this is in contradiction with the main paradigm in neural coding theory, “efficient coding”.
The efficient coding hypothesis stipulates that neurons encode signals into spike trains in an efficient way, that is, it uses a code such that all redundancy is removed from the original message while preserving information, in the sense that the encoded message can be mapped back to the original message (Barlow, 1961; Simoncelli, 2003). This implies that with a perfectly efficient code, encoded messages are undistinguishable from random. Since the code is determined on the statistics of the inputs and only the encoded messages are communicated, a code is efficient to the extent that it is not understandable by the receiver. This is the paradox of the efficient code.
In the neural coding metaphor, the code is private and specific to each neuron. If we follow this metaphor, this means that all neurons speak a different language, a language that allows expressing concepts very concisely but that no one else can understand. Thus, according to the coding metaphor, the brain is a Tower of Babel.»
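The claim that a perfectly efficient code makes messages look random can be checked with an ordinary compressor standing in for the neuron (my analogy, not the author's): gzip strips redundancy from a text, and the empirical byte entropy of its output climbs toward the 8 bits/byte of noise, so a receiver without the code learns little from the statistics of the message alone.

```python
import gzip
import math
from collections import Counter


def byte_entropy(data: bytes) -> float:
    """Empirical Shannon entropy of a byte string, in bits per byte."""
    n = len(data)
    counts = Counter(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())


message = (
    "A pervasive metaphor in neuroscience is the idea that neurons encode "
    "stuff: some neurons encode pain; others encode the location of a sound; "
    "maybe a population of neurons encode some other property of objects. "
    "The mapping between the objective properties and neural activity is the code."
)
plain = message.encode("utf-8")
coded = gzip.compress(plain)  # the "efficiently coded" message

print(f"plain entropy:  {byte_entropy(plain):.2f} bits/byte")
print(f"coded entropy:  {byte_entropy(coded):.2f} bits/byte")
```

English text sits around 4 bits/byte; the compressed bytes land much closer to the 8 bits/byte ceiling, which is exactly the "indistinguishable from random" property the paradox turns on.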
🤔2