I have an opening for a 4y PhD position in my ERC Consolidator project DREAM (“distributed dynamic representations for dialogue management”) at #ILLC in Amsterdam. Deadline 25 Feb 2019. More details: http://www.illc.uva.nl/NewsandEvents/News/Positions/newsitem/10538/
Please spread the word! #NLProc AmsterdamNLP
❇️ @AI_Python
🗣 @AI_Python_arXiv
✴️ @AI_Python_EN
We've opened another position at AYLIEN: complete a PhD or MSc while working on our Research team! Get in touch fast with a CV and a cover letter outlining a research proposal (open to worldwide students willing to relocate to Dublin) #NLProc #deeplearning
✴️ @AI_Python_EN
❇️ @AI_Python
🗣 @AI_Python_arXiv.
Google Doubles Down On Spammers With #TensorFlow. #BigData #Analytics #MachineLearning #DataScience #AI #NLProc #IoT #IIoT #PyTorch #Python #RStats #Java #JavaScript #ReactJS #GoLang #CloudComputing #Serverless #DataScientist #Linux #TransferLearning
🌎 http://bit.ly/2U4ZSAf
✴️ @AI_Python_EN
❇️ @AI_Python
🗣 @AI_Python_arXiv
Comprehensive Collection of #DataScience and #MachineLearning Resources for #DataScientists includes “Great Articles on Natural Language Processing” +much more 👉https://bit.ly/2nvMXIx #abdsc #BigData #AI #DeepLearning #Databases #Coding #Python #Rstats #NeuralNetworks #NLProc
✴️ @AI_Python_EN
How This Researcher Is Using #DeepLearning To Shut Down Trolls And Fake Reviews. #BigData #Analytics #DataScience #AI #MachineLearning #NLProc #IoT #IIoT #PyTorch #Python #RStats #JavaScript #ReactJS #GoLang #Serverless #DataScientist #Linux
🌎 https://bit.ly/2U2J5BX
✴️ @AI_Python_EN
CS224N Natural Language Processing with Deep Learning 2019
YouTube playlist:
https://www.youtube.com/playlist?list=PLoROMvodv4rOhcuXMZkNm7j3fVwBBY42z
http://onlinehub.stanford.edu/cs224 #NLProc
✴️ @AI_Python_EN
A collection of research papers on decision trees, classification trees, and regression trees with implementations:
https://github.com/benedekrozemberczki/awesome-decision-tree-papers
#BigData #MachineLearning #AI #DataScience #Algorithms #NLProc #Coding #DataScientists
✴️ @AI_Python_EN
"Fair is Better than Sensational:Man is to Doctor as Woman is to Doctor"
Do word embeddings really say that man is to doctor as woman is to nurse? Apparently not!
Nissim et al.: https://arxiv.org/abs/1905.09866
#ArtificialIntelligence #MachineLearning #NLProc #bias
✴️ @AI_Python_EN
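For context on the paper's point: the standard analogy routine never returns one of the query words, so "man : doctor :: woman : ?" cannot answer "doctor" by construction. Below is a minimal sketch of this effect, assuming gensim and a pretrained GloVe model (the model name is just an illustrative choice, not from the paper):

import gensim.downloader as api

# Illustrative embedding choice; any pretrained word vectors would do.
model = api.load("glove-wiki-gigaword-100")

# Standard analogy query: gensim excludes the input words from the candidates.
print(model.most_similar(positive=["woman", "doctor"], negative=["man"], topn=3))

# Same offset vector, scored against the full vocabulary, query words included.
vec = model["doctor"] - model["man"] + model["woman"]
print(model.similar_by_vector(vec, topn=3))  # "doctor" itself typically ranks near the top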
We have just released Multi-SimLex v1: a new multilingual #NLProc resource for semantic similarity. It covers 1,888 concept pairs across 12 typologically diverse languages, plus 66 cross-lingual datasets.
https://multisimlex.com
Multi-SimLex provides a new, typologically diverse evaluation benchmark for representation learning models. See our paper for experiments and interesting analysis:
https://arxiv.org/pdf/2003.04866.pdf
But this is not all! We are also launching a collaborative initiative to extend Multi-SimLex to cover many more of the world’s languages! Please join us in this effort to create an extensive semantic similarity resource for the needs of contemporary multilingual #NLProc. We welcome your contributions for both small and major languages! Follow the guidelines at https://multisimlex.com to create and submit a Multi-SimLex-style dataset for your favourite language. All the contributions will be shared with everyone via the Multi-SimLex site.
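For readers who want to try the benchmark, here is a rough sketch of the usual evaluation protocol (not the paper's code; the TSV column names and the embedding dictionary are assumptions): score each concept pair with the cosine similarity of its word vectors and report the Spearman correlation against the human ratings.

import csv
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    # Cosine similarity between two word vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate(tsv_path, emb):
    # emb: dict mapping words to numpy vectors (assumed to be precomputed).
    gold, pred = [], []
    with open(tsv_path, encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            w1, w2 = row["word1"], row["word2"]
            if w1 in emb and w2 in emb:            # skip out-of-vocabulary pairs
                gold.append(float(row["score"]))   # human similarity rating
                pred.append(cosine(emb[w1], emb[w2]))
    rho, _ = spearmanr(gold, pred)                 # rank correlation with human judgements
    return rho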