Ai Events
5.96K subscribers
950 photos
83 videos
26 files
763 links
This channel aims to cover all events related to artificial intelligence, data science, etc.
Hamid Mahmoodabadi


Contact me:
@MahmoodabadiHamid
LLMs Develop Deeper Understanding of Language

Ask a large language model (LLM) like GPT-4 to smell a rain-soaked campsite, and it'll politely decline. However, when describing the scent, it'll wax poetic about 'an air thick with anticipation' and 'a scent that is both fresh and earthy,' despite having neither prior experience with rain nor a nose to help it make such observations.

LLMs have long been considered to lack understanding of language, as they simply mimic text present in their training data. However, a recent study by researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) suggests that LLMs may develop their own understanding of reality as a way to improve their generative abilities.

The team trained an LLM on a set of small Karel puzzles, which consisted of coming up with instructions to control a robot in a simulated environment. They then used a machine learning technique called 'probing' to look inside the model's thought process as it generates new solutions.

After training on over 1 million random puzzles, the model spontaneously developed its own conception of the underlying simulation, despite never being exposed to this reality during training. This finding calls into question our intuitions about what types of information are necessary for learning linguistic meaning.
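The 'probing' technique can be illustrated with a toy example: fit a small linear model on a network's hidden activations and test whether a world-state variable is decodable from them. Everything below is synthetic and hypothetical (random vectors standing in for LLM activations, a scalar "robot position" standing in for the simulated world), not the CSAIL team's actual setup:

```python
import numpy as np

# Hypothetical setup: hidden states of dimension d that, by construction,
# linearly encode a latent world state (here a scalar "robot x-position").
rng = np.random.default_rng(0)
d, n = 64, 500
true_direction = rng.normal(size=d)            # direction encoding the latent
positions = rng.uniform(0, 10, size=n)         # ground-truth world states
hidden = np.outer(positions, true_direction) + 0.1 * rng.normal(size=(n, d))

# A linear "probe": least-squares regression from activations to world state.
w, *_ = np.linalg.lstsq(hidden, positions, rcond=None)
pred = hidden @ w

# If the probe recovers the latent accurately, the representation "contains" it.
r2 = 1 - np.sum((pred - positions) ** 2) / np.sum((positions - positions.mean()) ** 2)
print(f"probe R^2: {r2:.3f}")
```

A high R² here means the world state is linearly readable from the activations; probing studies apply the same logic to real model layers, with careful controls to rule out the probe itself doing the work.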

@Ai_Events
.
Innovative Tech Aims to Predict Heart Health with Continuous Monitoring

Roeland Decorte, a young Belgian, was inspired to develop technology to diagnose heart conditions after his father's life-threatening heart condition was misdiagnosed. Decorte grew up in a nursing home, where he learned to spot early signs of mental decline in residents.

Decorte founded a company to crack the 'secret rhythm of the heart' using AI and machine learning. He aimed to develop a technology that could continuously monitor the body and detect subtle changes in vital signs, enabling quicker diagnosis and treatment.

Initial attempts to build sensors into clothes and an exoskeleton to measure vitals were unsuccessful due to noise and interference from external factors. Decorte learned a valuable lesson about the importance of precision and accuracy in health monitoring solutions.

The innovative tech has the potential to revolutionize healthcare, especially during the current AI boom, where data is a major bottleneck. Decorte's solution could bridge the gap between data collection and diagnosis, enabling doctors to treat patients more effectively.

@Ai_Events
.
The artificial intelligence-powered search engine is one of the fastest-growing generative AI apps since ChatGPT, despite controversy over its data-gathering techniques.

Source

@Ai_Events
MIT Researchers Propose AI-Proof Personhood Credentials

Artificial intelligence agents are becoming increasingly advanced, making it difficult to distinguish between AI-powered users and real humans online. To address this issue, researchers from MIT, OpenAI, Microsoft, and other institutions propose the use of personhood credentials, a verification technique that enables someone to prove they are a real human online while preserving their privacy.

Personhood credentials would allow users to prove they are human without revealing any sensitive information about their identity. To obtain one, users would need to verify themselves in person or through an existing relationship with a government, such as a tax ID number.

The proposal aims to combat the risks associated with advanced AI capabilities, including the ability to create fake content, algorithmically amplify content, and spread misinformation. If implemented, personhood credentials could help filter out certain content and moderate online interactions.

However, there are risks associated with personhood credentials, including the concentration of power and potential stifling of free expression. To mitigate these risks, the proposal suggests implementing personhood credentials in a way that ensures a variety of issuers and an open protocol for maximum freedom of expression.
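One way to think about such a credential is as an issuer-signed token bound to a random pseudonym rather than to an identity. The sketch below is a deliberately simplified toy, not the paper's protocol: it uses an HMAC as a stand-in for the issuer's signature, whereas real proposals use blind signatures or zero-knowledge proofs so that even the issuer cannot link pseudonym to identity:

```python
import hashlib
import hmac
import secrets

# Toy model: an issuer verifies personhood offline (e.g., in person), then
# signs a random pseudonym. Services can check the signature without ever
# learning who the holder is.
ISSUER_KEY = secrets.token_bytes(32)  # held only by the credential issuer

def issue_credential() -> tuple[bytes, bytes]:
    pseudonym = secrets.token_bytes(16)  # reveals nothing about identity
    tag = hmac.new(ISSUER_KEY, pseudonym, hashlib.sha256).digest()
    return pseudonym, tag

def verify_credential(pseudonym: bytes, tag: bytes) -> bool:
    expected = hmac.new(ISSUER_KEY, pseudonym, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

pseudonym, tag = issue_credential()
print(verify_credential(pseudonym, tag))                 # genuine credential
print(verify_credential(secrets.token_bytes(16), tag))   # forged pseudonym
```

The key property the real schemes add on top of this sketch is unlinkability: no party, including the issuer, should be able to connect a pseudonym back to the person who obtained it.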

@Ai_Events
.
Iranian Hackers Target Presidential Campaign, Microsoft Reports

For the third presidential election in a row, foreign hacking of the campaigns has begun in earnest. This time, it's the Iranians, not the Russians, making the first significant move. Microsoft released a report stating that a hacking group run by an Iranian intelligence unit had successfully breached the account of a former senior adviser to a presidential campaign.

From that account, the group sent targeted fake email messages, a technique known as 'spear phishing,' to a high-ranking official of a presidential campaign in an effort to break into the campaign's own accounts and databases. While it is unclear what, if anything, the Iranian group was able to achieve, the events of the past few days may well portend a more intense period of foreign interference in the race.

The Iranians have a clear motive to see President Trump defeated, as he withdrew from the 2015 nuclear deal, reimposed economic sanctions on Iran, and ordered the killing of Maj. Gen. Qassim Suleimani, the commander of the Quds Force. The Iranian Revolutionary Guard Corps appears determined to avenge Suleimani's death and has been accused of trying to hire a hit man to assassinate political figures in the US, including Mr. Trump.

The hack and the assassination attempt give the former president an obvious foil, and he is using it to make the case that the Iranians would prefer a continuation of the Biden-Harris administration. Microsoft stopped short of saying that the hacking effort it detected was focused on Mr. Trump's campaign, though the campaign itself said that was the case.

The effort is similar in technique to what Iran attempted when it sought to interfere in the 2020 presidential campaign, but this time it appears to have been more sophisticated, suggesting the hackers learned something from what the Russians accomplished in past campaigns.
Source

@Ai_Events
AI Risk Database Launched to Monitor Post-Deployment Woes

A new database has been created to track the risks associated with artificial intelligence (AI), highlighting the need for ongoing monitoring and mitigation after models are deployed.

The database, compiled by MIT FutureTech, lists 22 potential risks, many of which cannot be checked for ahead of time, according to director Neil Thompson.

Previous attempts to catalog AI risks were limited in scope, but this new database aims to provide a comprehensive and neutral view of the threats, sidestepping the challenge of ranking risks by severity.

Despite its thoroughness, the database may have limited usefulness if it only serves as a list of risks without providing solutions for mitigation.

Researchers intend for the database to be a living document, seeking feedback and further investigation into under-researched areas, with potential solutions to be developed in the future.

@Ai_Events
.
People Are Forming Relationships with AI Systems

Two years after AI was expected to boost productivity, many people are still waiting to see those gains. What's unexpected is that people have started forming relationships with AI systems, treating them as friends, lovers, and mentors.

Researchers from the MIT Media Lab and Harvard Law School argue that we need to prepare for 'addictive intelligence' and regulate AI chatbots to prevent risks. Chatbots with emotive voices are likely to form deep connections with users.

The most popular use case for AI language models is creative composition; the second most popular is sexual role-playing. People also use them for brainstorming, planning, and asking for general information.

AI chatbots can be useful for generating ideas and assisting with creative tasks, but their limitations are becoming increasingly apparent, and investors are starting to lose confidence in the technology.

The hype surrounding AI has set unrealistic expectations, leading to disappointment and disillusionment when the technology fails to deliver on its promises. It may take years for AI to reach its full potential.

@Ai_Events
.
We are actively seeking talented individuals with expertise in scientific data visualization and computing. I'm reaching out to inform you about several new positions within my group at UK Atomic Energy Authority. Below, you will find the details of the available roles.

- Advanced Visualization Scientist: https://careers.ukaea.uk/job/advanced-visualisation-scientist/
- Lead Advanced Visualization Scientist: https://careers.ukaea.uk/job/lead-advanced-visualisation-scientist/

Experience Requirements:

- Scientific or 3D visualization, High Performance Computing

- Common visualization frameworks such as VTK, Kitware ParaView, Omniverse, etc.

- Computer graphics, including knowledge of rendering techniques, shading languages, and graphics APIs (e.g., OpenGL, DirectX, Vulkan).

- Python, C++, CUDA (and other GPU Technologies)

- Scientific visualization-relevant data formats and proficient in data conversion (e.g., VTK, VDB, USD, HDF5)

- Open-source projects or published research in relevant fields

Join our team at UKAEA and contribute to the future of fusion energy.


@Ai_Events
:))))))))))

@Ai_Events
Trump Shares Fake AI-Generated Images of Taylor Swift Fans Supporting Him

Former President Donald Trump has been caught spreading fake AI-generated images claiming Taylor Swift fans are supporting his campaign. He shared four screenshots on Truth Social, purportedly showing young women wearing 'Swifties for Trump' T-shirts.

However, an analysis by WIRED found that several of the images show 'substantial evidence of manipulation', with some potentially created by an anonymous pro-Trump account with over 300,000 followers.

The so-called 'Swifties for Trump' campaign appears to be a fabrication, with no real evidence of an active initiative. Meanwhile, there is a Swifties4Kamala group, but its cofounder emphasizes that they do not represent all Swifties.

Trump has a history of sharing AI-generated images, including one from an anonymous pro-Trump account claiming the Harris campaign was artificially inflating crowd sizes at her rallies.

Disinformation experts have warned about the threat posed to election integrity by generative AI tools, and this example highlights the issue.

@Ai_Events
.
Condé Nast Partners with OpenAI to Use Content in ChatGPT and SearchGPT

Condé Nast and OpenAI have struck a multi-year deal, allowing the AI giant to use content from the media giant's roster of properties, including WIRED, on both ChatGPT and SearchGPT.

The deal aims to meet audiences where they are and ensure proper attribution and compensation for the use of intellectual property. Condé Nast CEO Roger Lynch highlighted the ongoing turmoil within the publishing industry and the need for revenue from deals like this to continue investing in journalism and creative endeavors.

Specific terms of the partnership have not been disclosed, and OpenAI declined to comment. The deal has raised concerns among NewsGuild of New York members, who are seeking transparency on how the technology will be used and its potential impact on their work.

Condé Nast joins a growing list of media companies partnering with generative AI companies, including The Atlantic, Axel Springer, and TIME. As major AI companies increasingly gather training data through scraping, publishers face a choice: allowing it and risking the impact on their online visibility or not and risking the loss of their content's discoverability.

This deal has drawn criticism from some, with The Information's CEO Jessica Lessin comparing it to 'settling without litigation' and arguing that publishers are trading their credibility for cash. Condé Nast employees have also expressed concerns, with some questioning the ethics of training AI tools that could spread misinformation.

@Ai_Events
.
AI tea talks Singapore

A Neural Network Approach for Human Visual Learning

Ru-Yuan Zhang
Associate Professor at Shanghai Jiao Tong University

Thu, Aug 22, 8:00 PM Singapore/Beijing time
Thu, Aug 22, 3:30 PM Tehran time
Thu, Aug 22, 1:00 PM London time
Thu, Aug 22, 8:00 AM New York time


Zoom Link

More information: https://aiteatalksingapore.github.io/

@Ai_Events
Boards Must Improve AI Governance for Responsible Use

According to Carine Smith Ihenacho of Norway's sovereign wealth fund, boards need to be proficient with AI and take control of its application in businesses to mitigate risks. The fund has recommended responsible AI practices to the companies it invests in, emphasizing the importance of robust governance structures to manage AI-related risks.

The fund has shared its perspective on AI with the boards of its 60 largest portfolio companies, focusing on AI use in the healthcare sector due to its substantial impact on consumers. The fund's adoption of AI governance aligns with rising global concerns about the ethical implications and potential dangers of these technologies.

As companies seek to harness the power of AI while navigating its complexities, the guidance provided by influential investors like Norges Bank Investment Fund may serve as a blueprint for responsible AI implementation and governance in the corporate world.

The fund's emphasis on AI governance is particularly relevant, given that nine of the ten largest positions in its equity holdings are tech companies. This underscores the significant role that technology and AI play in the world today.

@Ai_Events
.
AI Capabilities Growing Faster Than Hardware: Can Decentralization Close the Gap?

AI capabilities have exploded over the past two years, with large language models like ChatGPT, Dall-E, and Midjourney becoming everyday use tools.

A recent McKinsey survey revealed that the number of companies that have adopted generative AI in at least one business function doubled to 65% within a year.

However, training and running AI programs is a resource-intensive endeavor, and big tech seems to have an upper hand, creating the risk of AI centralization.

Projections from the World Economic Forum and Epoch AI show an accelerating demand for AI compute, with computational power requirements growing at an annual rate of 26-36%.
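Those rates compound dramatically. A quick back-of-the-envelope calculation, assuming the cited 26-36% annual growth holds for a decade, shows the scale of the demand:

```python
# Compound growth of AI compute demand at the cited annual rates.
for rate in (0.26, 0.36):
    demand = 1.0
    for _ in range(10):          # ten years of compounding
        demand *= 1 + rate
    print(f"{rate:.0%}/yr -> {demand:.1f}x demand after 10 years")
```

At 26% per year, demand grows roughly tenfold in a decade; at 36%, more than twentyfold, which is why access to computational power becomes the bottleneck the post describes.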

Microsoft, Google, Alphabet, and Nvidia are investing heavily in AI research and development, leaving smaller companies struggling to access computing power.

Decentralized computing infrastructures like Qubic, a Layer 1 blockchain, offer an alternative, using miners to provide computational power.

This decentralized approach could reduce costs and increase innovation, making it easier for more stakeholders to develop AI solutions.

The challenge of accessing computational power is a hindrance to AI innovation, and decentralization could be the solution to close the gap.

@Ai_Events
.
Primate Labs Launches Geekbench AI Benchmarking Tool

Primate Labs has launched Geekbench AI, a benchmarking tool designed for machine learning and AI-centric workloads. The tool provides a standardized method for measuring and comparing AI capabilities across different platforms and architectures.

Geekbench AI offers three overall scores, reflecting the complexity and heterogeneity of AI workloads, and includes accuracy measurements for each test. The tool supports a wide range of AI frameworks, including OpenVINO, TensorFlow Lite, and more.
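At its core, a benchmark like this times a workload repeatedly and scores both speed and output accuracy. The sketch below is a minimal, generic harness under those assumptions, not Geekbench AI's actual methodology; the "workload" is a stand-in computation, where a real suite would run model inference:

```python
import statistics
import time

def benchmark(workload, reference, runs: int = 20) -> dict:
    """Time a workload over several runs and check its output against a reference."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        result = workload()
        timings.append(time.perf_counter() - start)
    # Simplistic exact-match accuracy; real AI benchmarks use task-specific
    # metrics (e.g., top-1 accuracy, IoU) per test.
    accuracy = 1.0 if result == reference else 0.0
    return {"median_s": statistics.median(timings), "accuracy": accuracy}

# Stand-in "AI workload": a dot-product-style loop.
def toy_workload():
    return sum(i * i for i in range(10_000))

scores = benchmark(toy_workload, reference=toy_workload())
print(scores)
```

Reporting accuracy alongside speed matters because quantized or reduced-precision backends can run faster while producing degraded outputs; scoring both keeps the comparison honest.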

The benchmark is integrated with the Geekbench Browser, allowing for easy cross-platform comparisons and result sharing. Primate Labs anticipates regular updates to Geekbench AI to keep pace with market changes and emerging AI features.

Major tech companies like Samsung and Nvidia have already begun utilizing the benchmark, and Primate Labs believes that Geekbench AI has reached a level of reliability suitable for integration into professional workflows.

@Ai_Events
.
AI Revolutionizes the World of Gaming

Artificial Intelligence (AI) is transforming the gaming industry in various ways, from enhancing NPCs to adjusting game difficulty and content. With AI, NPCs can now behave in more human-like ways, react to their environment, and respond differently to player choices.

Online casinos are also utilizing AI to better understand player preferences and detect unusual behavior, preventing fraud. Social casinos recommend games to players using AI, creating a personalized experience.

AI is being used in console games to create personalized storylines and quests based on player actions, as well as adaptive difficulty levels. AI also improves game visuals with technologies like AI upscaling and ray tracing, making games more realistic.
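The adaptive-difficulty idea can be sketched very simply: nudge enemy strength up or down based on the player's recent win rate. This is a hypothetical toy loop, not any particular game's system:

```python
# Toy adaptive-difficulty rule: steer the player's win rate toward a target
# by raising or lowering a difficulty multiplier after each batch of fights.
def adjust_difficulty(difficulty: float, recent_results: list[bool],
                      target_win_rate: float = 0.5, step: float = 0.1) -> float:
    win_rate = sum(recent_results) / len(recent_results)
    if win_rate > target_win_rate:
        difficulty += step                        # player is cruising: harder
    elif win_rate < target_win_rate:
        difficulty = max(0.1, difficulty - step)  # player is struggling: easier
    return difficulty

difficulty = 1.0
difficulty = adjust_difficulty(difficulty, [True, True, True, False])  # 75% wins
print(difficulty)
```

Production systems layer much more on top (smoothing, per-mechanic tuning, avoiding visible rubber-banding), but the feedback loop is the same.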

Overall, AI is transforming the gaming industry, providing more realistic and personalized experiences for players.

@Ai_Events
.
AI Growth Outpacing Security Measures

A recent survey by PSA Certified has revealed that the rapid growth of AI is outstripping the industry's ability to safeguard products, devices, and services. Two-thirds of 1,260 global technology decision-makers are concerned that the speed of AI advancements is leaving security measures behind.

The survey highlights the need for a holistic approach to security, embedded throughout the entire AI lifecycle, from device deployment to the management of AI models operating at the edge. This proactive approach is deemed essential to building consumer trust and mitigating escalating security risks.

While AI is a huge opportunity, its proliferation also offers the same opportunity to bad actors. Only half of respondents believe their current security investments are sufficient, and essential security practices, such as independent certifications and threat modelling, are being neglected by a substantial portion of respondents.

Industry leaders emphasize the importance of prioritizing security investment and taking a collective responsibility to ensure consumer trust in AI-driven services is maintained. A majority of decision-makers believe their organizations are equipped to handle the potential security risks associated with AI's surge.

@Ai_Events
.
The AI Revolution: Reshaping Data Centres and the Digital Landscape

Artificial intelligence (AI) is changing the world, with a projected global market value of $2-4 trillion by 2030. The future is now, and AI has crept into every facet of our lives, transforming work and play.

AI refers to the simulation of human intelligence processes, including learning, reasoning, and self-correction. The surge of AI is staggering, with examples like ChatGPT reaching a million users in just five days.

However, AI has a large appetite for data, requiring enormous computational power for processing. Data centres are the backbones of the digital world, evolving into entire ecosystems to facilitate the flow of information.

Data centres need efficient delivery of data worldwide, requiring power, connectivity, and cooling systems. As AI demands grow, so does the need for compatibility with data centre infrastructure.

Integrating AI presents challenges, including power, connectivity, and cooling. AI is ever-emerging, and regulatory changes must be made, such as the EU's AI Act and NIS2 Directive.

@Ai_Events
.
xAI Unveils Grok-2 to Challenge AI Hierarchy

xAI has announced the release of Grok-2, a major upgrade that boasts improved capabilities in chat, coding, and reasoning. The upgrade includes a smaller but capable version called Grok-2 mini, which will be made available through xAI's enterprise API later this month.

Grok-2 has shown significant improvements in reasoning with retrieved content and in its tool use capabilities, such as correctly identifying missing information, reasoning through sequences of events, and discarding irrelevant posts.

Grok on X has been given a redesigned interface and new features. Premium and Premium+ subscribers will have access to both Grok-2 and Grok-2 mini.

xAI is also collaborating with Black Forest Labs to experiment with their FLUX.1 model to expand Grok's capabilities on X. The company plans to roll out multimodal understanding as a core part of the Grok experience on both X and the API.

While the release of Grok-2 marks a significant milestone for xAI, it's clear that the AI landscape remains highly competitive, with OpenAI's GPT-4o and Google's Gemini 1.5 leading the pack.

@Ai_Events
.
EU Takes Action Against X's Use of EU User Data for AI Chatbot Training

The European Union has taken action against social media platform X, ordering the company to suspend the use of all data belonging to EU citizens for training its AI systems. This decision follows a complaint from the Irish Data Protection Commission (DPC), which has been monitoring X's data processing activities.

The DPC sought an order to restrain or suspend X's data processing activities on users for the development, training, and refinement of its AI system. This move marks a growing conflict between AI advances and ongoing data protection concerns in the EU.

X has agreed to pause the use of certain EU user data for AI chatbot training, citing concerns that the DPC's order would undermine its efforts to keep the platform safe and restrict its use of technologies in the EU. The company claims to have been fully transparent about the use of public data for AI models, including providing necessary legal assessments and engaging in lengthy discussions with regulators.

The regulatory action against X is not an isolated incident. Other tech giants, such as Meta Platforms and Google, have also faced similar scrutiny in recent months. Regulators are taking a more active role in overseeing how tech companies utilise user data for AI training and development, reflecting growing concerns about data privacy and the ethical implications of AI advancement.

The outcome of this case could set important precedents for how AI development is regulated in the EU, potentially influencing global standards for data protection in the AI era. The tech industry and privacy advocates alike will be watching closely as this situation develops, recognising its potential to shape the future of AI innovation and data privacy regulations.

@Ai_Events
.