AI concepts
Some Machine Learning methods are called an "ensemble".
What does that mean and what is the motivation behind an "ensemble"?
Intuitive explanation:
Instead of just asking one learning algorithm for the right answer, one asks a panel of different learning algorithms and the panel then votes on the final answer.
The motivation behind an ensemble is the idea that diversity among the models makes the ensemble's answer more robust than that of any single model.

Details

In the context of Machine Learning, ensemble methods refer to the use of multiple different models for the same task and subsequent combination of their outputs.
These models can be built from the same base learner or from different ones.
The word "ensemble" has been adopted from the French and means "together" or "group".
The result of all individual learners is then combined into an overall result, e.g. via simple majority vote.
Ensembles can be used in supervised as well as unsupervised settings.
Examples of ensemble types:
  • "Bootstrap aggregating" ("bagging")
  • "Boosting"
  • "Stacking"
  • ...
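
To make the majority vote concrete, here is a minimal sketch using scikit-learn; the dataset, the three base learners, and their settings are illustrative choices, not part of the definition:

```python
# A majority-vote ensemble over three diverse base learners.
# Dataset and model choices are illustrative, not prescriptive.
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "hard" voting means the panel's answer is the majority class label.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier()),
        ("tree", DecisionTreeClassifier()),
    ],
    voting="hard",
)
ensemble.fit(X_train, y_train)
print("ensemble accuracy:", ensemble.score(X_test, y_test))
```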
In the context of AI, what is a "transformer"?
(in just a few sentences; without going into details)

basics conceptual

A transformer is a neural network architecture. Transformers are based on the so-called attention mechanism.
Transformers were proposed in a paper called "Attention Is All You Need" that was published in 2017 by a team of AI researchers from Google.
In the years following the publication, transformers displaced the previously dominant recurrent neural network (RNN) architectures, including long short-term memory (LSTM) and gated recurrent unit (GRU) networks.
Transformers became widely used in many application domains, like computer vision and natural language processing (NLP).
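
As an optional illustration (beyond what the card asks for), the core operation of a transformer, scaled dot-product attention, can be sketched in a few lines of NumPy; the array shapes are arbitrary toy values:

```python
# Scaled dot-product attention, the core operation of a transformer.
# Shapes are toy values chosen for illustration.
import numpy as np

def attention(Q, K, V):
    # Similarity of every query with every key, scaled by sqrt(d_k).
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (n_queries, n_keys)
    # Softmax over keys turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted average of the value vectors.
    return weights @ V  # (n_queries, d_v)

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 queries of dimension 8
K = rng.normal(size=(6, 8))   # 6 keys of dimension 8
V = rng.normal(size=(6, 16))  # 6 values of dimension 16
print(attention(Q, K, V).shape)  # (4, 16)
```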
What is XGBoost and how does it work?
basics conceptual

XGBoost stands for "Extreme Gradient Boosting". XGBoost is a scalable, distributed gradient-boosted decision tree (GBDT) machine learning library.
It provides parallel tree boosting and is one of the leading machine learning libraries for regression, classification, and ranking problems.
XGBoost is based on decision trees rather than neural networks. It has dominated many Kaggle competitions in recent years.
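
A minimal usage sketch with the xgboost Python package; the dataset and hyperparameters below are illustrative assumptions, not recommendations:

```python
# Training a gradient-boosted tree classifier with the xgboost library.
# Dataset and hyperparameters are illustrative, not tuned.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```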
To understand how XGBoost works, you need to understand the following concepts:
  • supervised learning
  • decision trees
  • ensemble learning
  • gradient boosting
Gradient boosting
The term "boosting" in "gradient boosting" refers to the concept improving a single weak model by combining it with a number of other weak models in order to generate a collectively strong model.
Gradient boosting is an extension of boosting where the process of additively generating weak models is formalized as a gradient descent algorithm over an objective function.
Gradient boosting sets targeted outcomes for the next model in an effort to minimize errors. Targeted outcomes for each case are based on the gradient of the error (hence the name gradient boosting) with respect to the prediction.
GBDTs iteratively train an ensemble of shallow decision trees, with each iteration using the error residuals of the previous model to fit the next model. The final prediction is a weighted sum of all of the tree predictions. Random forest “bagging” minimizes the variance and overfitting, while GBDT “boosting” minimizes the bias and underfitting.
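
To make the residual-fitting loop concrete, here is a from-scratch sketch of gradient boosting for regression with squared-error loss, where the negative gradient of the loss with respect to the prediction is exactly the residual; tree depth, learning rate, and the toy data are assumptions for illustration:

```python
# Gradient boosting for regression, written out as the residual-fitting
# loop described above. With squared error, the negative gradient of the
# loss w.r.t. the prediction is exactly the residual y - F(x).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

learning_rate = 0.1
prediction = np.full_like(y, y.mean())  # start from a constant model
trees = []

for _ in range(100):
    residuals = y - prediction                 # negative gradient of squared error
    tree = DecisionTreeRegressor(max_depth=2)  # a shallow, weak learner
    tree.fit(X, residuals)                     # fit the next tree to the residuals
    prediction += learning_rate * tree.predict(X)  # additive update
    trees.append(tree)

print("training MSE:", np.mean((y - prediction) ** 2))
```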