What is Monte Carlo Integration? Monte Carlo is, in fact, the name of the world-famous casino located in the eponymous district of the principality of Monaco on the French Riviera. It turns out that the casino inspired famous scientists to devise an intriguing mathematical technique for solving complex problems in statistics, numerical computing, and system simulation. One of the first and most famous uses of this technique came during the Manhattan Project, when the chain-reaction dynamics in highly enriched uranium confronted the scientists with an unimaginably complex theoretical calculation. Even minds as brilliant as John von Neumann, Stanislaw Ulam, and Nicholas Metropolis could not tackle it in the traditional way.
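To make the idea concrete, here is a minimal sketch (not from the original article) of Monte Carlo integration in Python: the integral of a function over an interval is estimated by averaging the function's value at uniformly random sample points. The function f(x) = x^2 and the sample count are illustrative choices, not anything prescribed by the article.

```python
import random

def monte_carlo_integrate(f, a, b, n_samples=100_000):
    """Estimate the integral of f over [a, b] by averaging f at random points."""
    total = sum(f(random.uniform(a, b)) for _ in range(n_samples))
    return (b - a) * total / n_samples

# Example: integral of x**2 over [0, 1]; the exact value is 1/3.
estimate = monte_carlo_integrate(lambda x: x ** 2, 0.0, 1.0)
print(f"Monte Carlo estimate: {estimate:.4f} (exact: {1/3:.4f})")
```

The estimate converges toward the true value as the number of samples grows, which is what makes the technique attractive for problems too complex to solve analytically.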
To the chagrin of absolutely no one, 2020 is finally drawing to a close. It has been a rollercoaster of a year, one defined almost exclusively by the COVID-19 pandemic. But other things have happened, including in the fields of artificial intelligence (AI), data science, and machine learning. To that end, it is time for KDnuggets' annual year-end expert analysis and predictions. This year, we posed the question: What were the main developments in AI, data science, and machine learning research in 2020, and what key trends do you see for 2021?
Creating agents that resemble the cognitive abilities of the human brain has been one of the most elusive goals of the artificial intelligence (AI) space. Recently, I've been spending time on a couple of scenarios related to imagination in deep learning systems, which reminded me of a very influential paper that Alphabet's subsidiary DeepMind published last year on this subject. Imagination is one of those magical features of the human mind that differentiate us from other species. From the neuroscience standpoint, imagination is the ability of the brain to form images or sensations without any immediate sensory input. Imagination is a key element of our learning process, as it enables us to apply knowledge to specific problems and better plan for future outcomes.
Have you ever heard of the Jennifer Aniston neuron? In 2005, a group of neuroscientists led by Rodrigo Quian Quiroga published a paper detailing their discovery of a type of neuron that steadily fired whenever the person was shown a photo of Jennifer Aniston. The neuron in question was not activated when presented with photos of other celebrities. Obviously, we don't all have Jennifer Aniston neurons, and those specialized neurons can be activated in response to pictures of other celebrities. The fascinating thing about the Jennifer Aniston neuron is that it was discovered while Quiroga was researching areas of the brain that caused epileptic seizures, and it has since become one of the most powerful metaphors in neuroscience for neurons that focus on a very specific task.
In every field, specialized roles emerge in the early days and are replaced by commonplace roles over time. It seems this is another case of just that. The machine learning engineer role is a consequence of the massive hype fueling buzzwords such as "artificial intelligence" and "data science" in the enterprise. In the early days of machine learning, it was a very necessary role. And it commanded a nice little pay bump for many.
Data transformation is one of the fundamental steps in data processing. What does Feature Scaling mean? In practice, different types of variables are often encountered in the same data set. A significant issue is that the range of the variables may differ a lot. Using the original scale may put more weight on the variables with a large range.
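As a brief illustration (not part of the original article), the sketch below shows two common feature-scaling approaches with NumPy: min-max normalization, which rescales each variable to [0, 1], and standardization, which gives each variable zero mean and unit variance. The toy feature matrix and its values are made up for the example.

```python
import numpy as np

# Toy feature matrix: column 0 spans thousands, column 1 spans single digits,
# so the first column would dominate any distance-based calculation unscaled.
X = np.array([[1200.0, 3.1],
              [2500.0, 4.7],
              [1800.0, 2.9]])

# Min-max scaling: rescale each column to the [0, 1] range.
X_minmax = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Standardization: subtract the column mean and divide by the column standard deviation.
X_standardized = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_minmax)
print(X_standardized)
```

After either transformation, the two variables contribute on a comparable scale, which is the point of feature scaling for range-sensitive methods.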