Understanding Word Embeddings
Author: Machine Learning TV
Uploaded: 2019-09-15
Views: 10191
Full course link: https://www.coursera.org/learn/intro-...
So, the core idea here is that you want words that have similar neighbors, that is, similar contexts, to end up similar in this new vector representation. Now, let's see how we achieve that. But before we do, let's cover some of the math of how to represent words efficiently.
So, you have a word. Technically, to feed it into TensorFlow, you'd have to represent it as some kind of number, for example the ID of this word in your dictionary. The way you usually use this word in your pipeline is that you take a one-hot vector, a vector the size of your dictionary with only one nonzero value, and push it through some kind of linear model or neural network, or something similar. The only problem is that you're doing this very inefficiently. You have this one-hot vector, and then you multiply it by a weight vector, or a weight matrix. That's wasteful, because you have a lot of weights that get multiplied by zeros. You could compute this weighted sum much more efficiently: if you look slightly closer, you can write down the answer itself without any sums or multiplications.
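A minimal sketch of that observation, using NumPy and made-up sizes (the dictionary size and embedding dimension below are assumptions for illustration): multiplying a one-hot vector by a weight matrix gives exactly the row of the matrix at the word's ID, so the whole matrix-vector product can be replaced by a single row lookup.

```python
import numpy as np

vocab_size, embedding_dim = 10_000, 64           # hypothetical sizes
W = np.random.randn(vocab_size, embedding_dim)   # weight matrix, one row per word

word_id = 42                                     # ID of the word in the dictionary
one_hot = np.zeros(vocab_size)
one_hot[word_id] = 1.0

# Naive way: a full matrix-vector product, where almost every weight
# is multiplied by zero.
slow = one_hot @ W

# Efficient way: just select the row -- same result, no sums or multiplications.
fast = W[word_id]

assert np.allclose(slow, fast)
```

This is exactly what an embedding lookup does in practice: instead of materializing the one-hot vector, you index directly into the weight matrix with the word ID.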