What Project Management Tools to Use for Data Science Projects
Traditional project management methodologies do not work as stand-alone approaches in data science. Knowing the strengths of each for…
Towards Data Science
MONet: Unsupervised Scene Decomposition and Representation
The ability to decompose scenes in terms of abstract building blocks is crucial for general intelligence. Where those basic building blocks share meaningful properties, interactions and other regularities across scenes, such decompositions can simplify reasoning and facilitate imagination of novel scenarios. In particular, representing perceptual observations in terms of entities should improve data efficiency and transfer performance on a wide range of tasks. Thus we need models capable of discovering useful decompositions of scenes by identifying units with such regularities and representing them in a common format. To address this problem, we have developed the Multi-Object Network (MONet). In this model, a VAE is trained end-to-end together with a recurrent attention network -- in a purely unsupervised manner -- to provide attention masks around, and reconstructions of, regions of images. We show that this model is capable of learning to decompose and represent challenging 3D scenes into semantically mean
arXiv.org
Evolution of Traditional Statistical Tests in the Age of Data
The difference between significance testing in its more research-based/academic origins and its evolution in more dynamic application…
Towards Data Science
Segmenting Credit Card Customers with Machine Learning
Identifying marketable segments with unsupervised machine learning
Towards Data Science
Principal Component Analysis for Dimensionality Reduction
Learn how to perform PCA by learning the mathematics behind the algorithm and executing it step-by-step with Python!
Towards Data Science
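The step-by-step recipe such PCA tutorials walk through boils down to centering the data, eigendecomposing the covariance matrix, and projecting onto the top eigenvectors. A minimal NumPy sketch of those steps (the function name and interface are mine, not the article's):

```python
import numpy as np

def pca(X, n_components):
    # 1) center the data
    Xc = X - X.mean(axis=0)
    # 2) covariance matrix of the features
    cov = np.cov(Xc, rowvar=False)
    # 3) eigendecomposition; eigh returns eigenvalues in ascending order
    eigvals, eigvecs = np.linalg.eigh(cov)
    # 4) keep the eigenvectors with the largest eigenvalues (most variance)
    order = np.argsort(eigvals)[::-1][:n_components]
    components = eigvecs[:, order]
    # 5) project the centered data onto the principal components
    return Xc @ components, components
```

For 2-D data whose variance lies mostly along one axis, the single retained component recovers that axis (up to sign).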
Intelligent computing in Snowflake
In a little over a week, I’m heading over to Snowflake’s inaugural user summit in San Francisco, where I’ll be speaking on data sharing in…
Towards Data Science
How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits
We significantly reduce the cost of factoring integers and computing discrete logarithms over finite fields on a quantum computer by combining techniques from Griffiths-Niu 1996, Zalka 2006, Fowler 2012, Ekerå-Håstad 2017, Ekerå 2017, Ekerå 2018, Gidney-Fowler 2019, Gidney 2019. We estimate the approximate cost of our construction using plausible physical assumptions for large-scale superconducting qubit platforms: a planar grid of qubits with nearest-neighbor connectivity, a characteristic physical gate error rate of $10^{-3}$, a surface code cycle time of 1 microsecond, and a reaction time of 10 microseconds. We account for factors that are normally ignored such as noise, the need to make repeated attempts, and the spacetime layout of the computation. When factoring 2048 bit RSA integers, our construction's spacetime volume is a hundredfold less than comparable estimates from earlier works (Fowler et al. 2012, Gheorghiu et al. 2019). In the abstract circuit model (which ig…
arXiv.org
https://www.technologyreview.com/f/610439/making-sense-of-neural-networks-febrile-dreams/
🔗 A new tool helps us understand what an AI is actually thinking
Google researchers developed a way to peer inside the minds of deep-learning systems, and the results are delightfully weird. What they did: The team built a tool that combines several techniques to provide people with a clearer idea of how neural networks make decisions.
MIT Technology Review
KakaoBrain/torchgpipe
A GPipe implementation in PyTorch.
GitHub
Augmenting correlation structures in spatial data using deep generative models
https://arxiv.org/abs/1905.09796
State-of-the-art deep learning methods have shown a remarkable capacity to model complex data domains, but struggle with geospatial data. In this paper, we introduce SpaceGAN, a novel generative model for geospatial domains that learns neighbourhood structures through spatial conditioning. We propose to enhance spatial representation beyond mere spatial coordinates, by conditioning each data point on feature vectors of its spatial neighbours, thus allowing for a more flexible representation of the spatial structure. To overcome issues of training convergence, we employ a metric capturing the loss in local spatial autocorrelation between real and generated data as stopping criterion for SpaceGAN parametrization. This way, we ensure that the generator produces synthetic samples faithful to the spatial patterns observed in the input. SpaceGAN is successfully applied for data augmentation and outperforms other methods of synthetic spatial data generation. Finally, we propose an ensemble learning frame…
arXiv.org
Fundamentals of Statistics (Основы статистики)
Our Telegram channel - tglink.me/ai_machinelearning_big_data
00 - Fundamentals of Statistics. About the course
01 - Fundamentals of Statistics. Introduction
02 - Fundamentals of Statistics. Comparing means
03 - Fundamentals of Statistics. Correlation and regression
04 - Fundamentals of Statistics. Analysis of nominal data
05 - Fundamentals of Statistics. Logistic regression and nonparametric methods
06 - Fundamentals of Statistics. Cluster analysis and principal component analysis
07 - Fundamentals of Statistics. More on linear regression
08 - Fundamentals of Statistics. Mixed regression models
09 - Fundamentals of Statistics. Introduction to the bootstrap
🎥 00 - Fundamentals of Statistics. About the course
👁 737 views ⏳ 76 sec
Lecturer: Anatoly Karpov
https://stepik.org/76
🎥 01 - Fundamentals of Statistics. Introduction
👁 1349 views ⏳ 3847 sec
Lecturer: Anatoly Karpov
1. 0:00 General information about the course
2. 1:32 Population and sample
2.1 1:32 The concept of a population and a sample…
🎥 02 - Fundamentals of Statistics. Comparing means
👁 359 views ⏳ 4638 sec
Lecturer: Anatoly Karpov
1. The t-distribution
2. Comparing two means; Student's t-test
3. Checking a distribution for normality, QQ-plot
4. …
🎥 03 - Fundamentals of Statistics. Correlation and regression
👁 281 views ⏳ 6792 sec
Lecturer: Anatoly Karpov
1. The concept of correlation
2. Conditions for applying the correlation coefficient
3. Regression with one independent variable
4. Hypotheses…
🎥 04 - Fundamentals of Statistics. Analysis of nominal data
👁 231 views ⏳ 7503 sec
Lecturer: Anatoly Karpov
1. Problem statement
2. Pearson distance
3. Pearson's chi-square distribution
4. Computing the p-value
5. Analysis of tables…
🎥 05 - Fundamentals of Statistics. Logistic regression and nonparametric methods
👁 163 views ⏳ 8859 sec
Lecturer: Anatoly Karpov
1. Logistic regression. Problem statement
2. Model without predictors. Intercept-only model
3. Model with one nominal…
🎥 06 - Fundamentals of Statistics. Cluster analysis and principal component analysis
👁 173 views ⏳ 5970 sec
Lecturer: Anatoly Karpov
1. Cluster analysis with the k-means method
2. Can cluster analysis "make mistakes"?
3. How to determine the optimal number of clusters…
🎥 07 - Fundamentals of Statistics. More on linear regression
👁 164 views ⏳ 8245 sec
Lecturer: Anatoly Karpov
1. Introduction
2. Linearity of the relationship
3. Logarithmic transformation of variables
4. The problem of heteroscedasticity
5. Multi…
🎥 08 - Fundamentals of Statistics. Mixed regression models
👁 216 views ⏳ 3165 sec
Lecturer: Ivan Ivanchei
1. Introduction
2. Violation of the independence-of-observations assumption
3. Mixed regression models. Implementation in R
4. Statistical…
🎥 09 - Fundamentals of Statistics. Introduction to the bootstrap
👁 143 views ⏳ 3923 sec
Lecturer: Arseny Moskvichev
1. The jackknife
2. The bootstrap
https://stepik.org/2152
Vk
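Lesson 09's topic, the percentile bootstrap, fits in a few lines of NumPy: resample the data with replacement, recompute the statistic each time, and read off quantiles of the resulting distribution. A minimal sketch (the function name and defaults are mine, not the course's):

```python
import numpy as np

def bootstrap_ci(data, statistic, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for an arbitrary statistic."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    # resample with replacement and recompute the statistic each time
    stats = np.array([
        statistic(rng.choice(data, size=data.size, replace=True))
        for _ in range(n_resamples)
    ])
    # take the central 1 - alpha span of the bootstrap distribution
    return np.quantile(stats, alpha / 2), np.quantile(stats, 1 - alpha / 2)
```

Unlike a t-interval, this makes no normality assumption about the statistic, which is why the course pairs it with the jackknife as a general-purpose resampling tool.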
Style transfer is the process of transforming a source image to match the style of a chosen image. It relies on a convolutional neural network (CNN) that is pretrained, so a lot depends on the choice of that network. Fortunately, there are plenty of such networks to choose from; here VGG-16 is used.
First, we need to import the required libraries.
https://habr.com/ru/post/453512/
🔗 Style Transfer
Habr
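Style transfer of the kind the post describes matches style statistics of CNN feature maps, and the standard style statistic is the Gram matrix of a layer's activations. A minimal NumPy sketch of that one ingredient (function names are mine; the post's full pipeline extracts the features from a pretrained VGG-16, which is omitted here):

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, height, width) activations from one conv layer
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten the spatial dimensions
    return f @ f.T / (c * h * w)     # normalized channel-by-channel correlations

def style_loss(gram_generated, gram_style):
    # mean squared difference between the two Gram matrices
    return float(np.mean((gram_generated - gram_style) ** 2))
```

The optimization then adjusts the generated image so its Gram matrices approach those of the style image while its raw activations stay close to the content image's.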
🎥 CPE-DA #2 «The Artistic Potential of Neural Networks»
👁 1 view ⏳ 2301 sec
Lecture by Fedor Chervinsky
Research engineer, Samsung AI Center Moscow
Vk
🎥 Implementing K-Means Clustering From Scratch: Simply Explained
👁 1 view ⏳ 1201 sec
This video explains how the k-means clustering algorithm works, walks through an implementation from scratch, runs k-means with popular machine learning libraries, and surveys the wide range of applications that unsupervised algorithms such as k-means make possible.
_______
Website: https://www.discoverai.org/
Presentation: https://docs.google.com/presentation/d/1T_towpsxCC31tWytFMWxoJC91FUcm64VUhrY4W77bFk/edit?usp=sharing
Follow us on Twitter: https://twitter.com/_DiscoverAI_
Vk
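A from-scratch implementation like the one the video describes alternates two steps: assign each point to its nearest center, then move each center to the mean of its assigned points, until nothing changes. A minimal NumPy sketch under that description (the function name and defaults are mine, not the video's):

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # initialize centers at k distinct random data points
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assignment step: label each point with its nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: move each center to the mean of its assigned points
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):  # converged
            break
        centers = new_centers
    return centers, labels
```

On two well-separated blobs this recovers one center per blob; library versions (e.g. scikit-learn's) add smarter initialization and multiple restarts on top of the same loop.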
10 Python image manipulation tools.
An overview of some of the commonly used Python libraries that provide an easy and intuitive way to transform images.
https://towardsdatascience.com/image-manipulation-tools-for-python-6eb0908ed61f
Medium
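NumPy is one of the tools such overviews commonly include: an image is just an array, so flips, crops, and a naive grayscale are plain indexing and arithmetic. A small illustration on a synthetic image (not taken from the article):

```python
import numpy as np

# a synthetic 48x64 RGB image: left half red, right half blue
img = np.zeros((48, 64, 3), dtype=np.uint8)
img[:, :32] = (255, 0, 0)
img[:, 32:] = (0, 0, 255)

flipped = img[:, ::-1]                     # horizontal flip
crop = img[8:40, 16:48]                    # central crop
gray = img.mean(axis=2).astype(np.uint8)   # naive grayscale (channel average)
```

Dedicated libraries (Pillow, scikit-image, OpenCV) wrap the same array view with decoding, resampling, and color-space handling.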
AI investment activity - trends of 2018
AI hype slowdown, building cognitive tech stack, vertical integration and other observations
Towards Data Science
🎥 Machine Learning Part 19: Time Series And AutoRegressive Integrated Moving Average Model (ARIMA)
👁 1 view ⏳ 1250 sec
In this video, we cover AutoRegressive Integrated Moving Average Model (ARIMA), Auto Correlation Function (ACF) and Partial Auto Correlation Function (PACF).
CONNECT
Site: https://coryjmaklin.com/
Medium: https://medium.com/@corymaklin
GitHub: https://github.com/corymaklin
Twitter: https://twitter.com/CoryMaklin
Linkedin: https://www.linkedin.com/in/cory-makl...
Facebook: https://www.facebook.com/cory.maklin
Patreon: https://www.patreon.com/corymaklin
Vk
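The autocorrelation function the video covers measures how strongly a series correlates with lagged copies of itself, which is what guides the choice of ARIMA orders. A minimal NumPy sketch of the sample ACF (the function name is mine; libraries such as statsmodels provide a full-featured version):

```python
import numpy as np

def acf(x, max_lag):
    # sample autocorrelation function for lags 0..max_lag
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)  # n times the lag-0 autocovariance
    return np.array([1.0] + [np.dot(x[:-k], x[k:]) / denom
                             for k in range(1, max_lag + 1)])
```

White noise gives near-zero values at every nonzero lag, while an AR(1) series with coefficient 0.9 shows a large lag-1 autocorrelation, which is exactly the contrast ACF/PACF plots exploit when picking model orders.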