CS 224D: Deep Learning for NLP
Lecture Notes: Part I
Course Instructor: Richard Socher
Authors: Francois Chaubard, Rohit Mundra, Richard Socher
Spring 2015

Keyphrases: Natural Language Processing. Word Vectors. Singular Value Decomposition. Skip-gram. Continuous Bag of Words (CBOW). Negative Sampling.

This set of notes begins by introducing the concept of Natural Language Processing (NLP) and the problems NLP faces today. We then move forward to discuss the concept of representing words as numeric vectors. Lastly, we discuss popular approaches to designing word vectors.

1 Introduction to Natural Language Processing

We begin with a general discussion of what NLP is.
The goal of NLP is to be able to design algorithms to allow computers to understand natural language in order to perform some task. Example tasks come in varying levels of difficulty:

Easy
- Spell Checking
- Keyword Search
- Finding Synonyms

Medium
- Parsing information from websites, documents, etc.

Hard
- Machine Translation (e.g. translate Chinese text to English)
- Semantic Analysis (What is the meaning of a query statement?)
- Coreference (e.g. What does "he" or "it" refer to given a document?)
- Question Answering (e.g. answering Jeopardy questions)

The first and arguably most important common denominator across all NLP tasks is how we represent words as input to any and all of our models. Much of the earlier NLP work that we will not cover treats words as atomic symbols. To perform well on most NLP tasks, we first need to have some notion of similarity and difference between words.
With word vectors, we can quite easily encode this ability in the vectors themselves (using distance measures such as Jaccard, Cosine, Euclidean, etc.).
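To make the idea of a vector distance measure concrete, here is a minimal sketch (Python/NumPy; the three-dimensional vectors are invented for illustration and are not from the notes) of cosine similarity between word vectors:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors; closer to 1.0 means more similar direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 3-dimensional word vectors, invented for illustration.
w_hotel = np.array([0.90, 0.10, 0.30])
w_motel = np.array([0.80, 0.20, 0.25])
w_cat   = np.array([0.05, 0.95, 0.10])

print(cosine_similarity(w_hotel, w_motel))  # high: related words point in similar directions
print(cosine_similarity(w_hotel, w_cat))    # low: unrelated words point in different directions
```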
2 Word Vectors

There are an estimated 13 million tokens for the English language, but are they all completely unrelated? Feline to cat, hotel to motel? I think not. Thus, we want to encode word tokens each into some vector that represents a point in some sort of word "space". This is paramount for a number of reasons, but the most intuitive reason is that perhaps there actually exists some N-dimensional space (such that $N \ll 13$ million) that is sufficient to encode all the semantics of our language. Each dimension would encode some meaning that we transfer using speech. For instance, semantic dimensions might indicate tense (past vs. present vs. future), count (singular vs. plural), and gender (masculine vs. feminine).
So let's dive into our first word vector, arguably the most simple: the one-hot vector. We represent every word as an $\mathbb{R}^{|V| \times 1}$ vector with all 0s and one 1 at the index of that word in the sorted English language. In this notation, $|V|$ is the size of our vocabulary. Word vectors in this type of encoding would appear as the following:

$$w^{aardvark} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix},\quad w^{a} = \begin{bmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix},\quad w^{at} = \begin{bmatrix} 0 \\ 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix},\quad \cdots,\quad w^{zebra} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}$$

Fun fact: the term "one-hot" comes from digital circuit design, meaning a group of bits among which the legal combinations of values are only those with a single high (1) bit and all the others low (0).
We represent each word as a completely independent entity. As we previously discussed, this word representation does not directly give us any notion of similarity. For instance,

$$(w^{hotel})^T w^{motel} = (w^{hotel})^T w^{cat} = 0$$

So maybe we can try to reduce the size of this space from $\mathbb{R}^{|V|}$ to something smaller and thus find a subspace that encodes the relationships between words.
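The orthogonality problem is easy to see in code. Below is a small sketch (NumPy; the toy four-word vocabulary is invented for illustration) that builds one-hot vectors and shows that every pair of distinct words has a dot product of zero:

```python
import numpy as np

# Toy sorted vocabulary, invented for illustration; |V| = 4.
vocab = ["cat", "hotel", "motel", "zebra"]
index = {word: i for i, word in enumerate(vocab)}

def one_hot(word: str) -> np.ndarray:
    """R^{|V|} vector with a single 1 at the word's index in the sorted vocabulary."""
    v = np.zeros(len(vocab))
    v[index[word]] = 1.0
    return v

w_hotel, w_motel, w_cat = one_hot("hotel"), one_hot("motel"), one_hot("cat")

# Distinct one-hot vectors are always orthogonal, so they encode no similarity:
print(w_hotel @ w_motel)  # 0.0
print(w_hotel @ w_cat)    # 0.0
```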
3 SVD Based Methods

For this class of methods to find word embeddings (otherwise known as word vectors), we first loop over a massive dataset and accumulate word co-occurrence counts in some form of a matrix X, and then perform Singular Value Decomposition on X to get a $USV^T$ decomposition. We then use the rows of U as the word embeddings for all words in our dictionary. Let us discuss a few choices of X.

3.1 Word-Document Matrix

As our first attempt, we make the bold conjecture that words that are related will often appear in the same documents. For instance, "banks", "bonds", "stocks", "money", etc. are probably likely to appear together.
But "banks", "octopus", "banana", and "hockey" would probably not consistently appear together. We use this fact to build a word-document matrix X in the following manner: loop over billions of documents, and each time word i appears in document j, we add one to entry $X_{ij}$.
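A minimal sketch of that counting loop (Python; the two tiny "documents" stand in for the billions mentioned above and are purely illustrative):

```python
from collections import defaultdict

# Two toy "documents" standing in for billions of real ones (illustrative only).
documents = [
    "banks issue bonds and stocks for money",
    "the octopus ate a banana at the hockey game",
]

# X[word][j] counts how many times a word appears in document j.
X = defaultdict(lambda: defaultdict(int))
for j, doc in enumerate(documents):
    for word in doc.split():
        X[word][j] += 1

print(X["banks"][0], X["banks"][1])  # 1 0  -> "banks" appears only in document 0
```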
Such a matrix is obviously very large ($\mathbb{R}^{|V| \times M}$), and it scales with the number of documents (M). So perhaps we can try something better.

3.2 Word-Word Co-occurrence Matrix

The same kind of logic applies here; however, the matrix X now stores co-occurrences of words, thereby becoming an affinity matrix. We display an example below.
Using a word-word co-occurrence matrix:
- Generate a $|V| \times |V|$ co-occurrence matrix X.
- Apply SVD on X to get $X = USV^T$.
- Select the first k columns of U to get k-dimensional word vectors.
- $\frac{\sum_{i=1}^{k} \sigma_i}{\sum_{i=1}^{|V|} \sigma_i}$ indicates the amount of variance captured by the first k dimensions.

Let our corpus contain just three sentences:
1. I enjoy flying.
2. I like NLP.
3. I like deep learning.

The resulting counts matrix will then be:

X          I  like  enjoy  deep  learning  NLP  flying  .
I          0   2     1      0      0        0     0     0
like       2   0     0      1      0        1     0     0
enjoy      1   0     0      0      0        0     1     0
deep       0   1     0      0      1        0     0     0
learning   0   0     0      1      0        0     0     1
NLP        0   1     0      0      0        0     0     1
flying     0   0     1      0      0        0     0     1
.          0   0     0      0      1        1     1     0

We now perform SVD on X, observe the singular values (the diagonal entries in the resulting S matrix), and cut them off at some index k based on the desired percentage variance captured:
$$\frac{\sum_{i=1}^{k} \sigma_i}{\sum_{i=1}^{|V|} \sigma_i}$$

We then take the submatrix of $U_{1:|V|,\,1:k}$ to be our word embedding matrix. This would thus give us a k-dimensional representation of every word in the vocabulary.

Applying SVD to X:

$$X = \begin{bmatrix} | & | & \\ u_1 & u_2 & \cdots \\ | & | & \end{bmatrix} \begin{bmatrix} \sigma_1 & 0 & \cdots \\ 0 & \sigma_2 & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix} \begin{bmatrix} \text{---} & v_1 & \text{---} \\ \text{---} & v_2 & \text{---} \\ & \vdots & \end{bmatrix}$$

Reducing dimensionality by selecting the first k singular vectors:

$$\hat{X} = \begin{bmatrix} | & | & \\ u_1 & u_2 & \cdots \\ | & | & \end{bmatrix}_{|V| \times k} \begin{bmatrix} \sigma_1 & 0 & \cdots \\ 0 & \sigma_2 & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix}_{k \times k} \begin{bmatrix} \text{---} & v_1 & \text{---} \\ \text{---} & v_2 & \text{---} \\ & \vdots & \end{bmatrix}_{k \times |V|}$$
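For concreteness, here is a sketch (NumPy; the whitespace tokenization and the window of one are simplifying assumptions made here, not prescriptions from the notes) that builds this co-occurrence matrix for the three-sentence corpus, applies SVD, and keeps the first k columns of U as word vectors:

```python
import numpy as np

corpus = ["I enjoy flying .", "I like NLP .", "I like deep learning ."]
tokens = [sentence.split() for sentence in corpus]
vocab = sorted({w for sent in tokens for w in sent})
index = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts over immediate neighbours (window of 1).
X = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for a, b in zip(sent, sent[1:]):
        X[index[a], index[b]] += 1
        X[index[b], index[a]] += 1

U, S, Vt = np.linalg.svd(X)

k = 2
variance_captured = S[:k].sum() / S.sum()  # ratio of the first k singular values
word_vectors = U[:, :k]                    # k-dimensional embedding for each word

print(f"variance captured: {variance_captured:.2f}")
print("vector for 'like':", word_vectors[index["like"]])
```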
Both of these methods give us word vectors that are more than sufficient to encode semantic and syntactic (part of speech) information, but they are associated with many other problems:

- The dimensions of the matrix change very often (new words are added very frequently and the corpus changes in size).
- The matrix is extremely sparse since most words do not co-occur.
- The matrix is very high dimensional in general ($\approx 10^6 \times 10^6$).
- Quadratic cost to train (i.e. to perform SVD).
- Requires the incorporation of some hacks on X to account for the drastic imbalance in word frequency.

Some solutions exist to resolve some of the issues discussed above:

- Ignore function words such as "the", "he", "has", etc.
- Apply a ramp window, i.e. weight the co-occurrence count based on the distance between the words in the document.
- Use Pearson correlation and set negative counts to 0 instead of using just the raw count.

As we see in the next section, iteration-based methods solve many of these issues in a far more elegant manner.
4 Iteration Based Methods

Let us step back and try a new approach. Instead of computing and storing global information about some huge dataset (which might be billions of sentences), we can try to create a model that will be able to learn one iteration at a time and eventually be able to encode the probability of a word given its context.

Context of a word: the context of a word is the set of C surrounding words. For instance, the C = 2 context of the word "fox" in the sentence "The quick brown fox jumped over the lazy dog" is {"quick", "brown", "jumped", "over"}.
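A quick sketch of extracting such a context window (plain Python; the helper name `context` is invented here, not from the notes):

```python
def context(words, center_index, C=2):
    """Return the C words on each side of the center word (the center itself is excluded)."""
    left = words[max(0, center_index - C):center_index]
    right = words[center_index + 1:center_index + 1 + C]
    return left + right

sentence = "The quick brown fox jumped over the lazy dog".split()
print(context(sentence, sentence.index("fox")))  # ['quick', 'brown', 'jumped', 'over']
```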
We can set up this probabilistic model of known and unknown parameters and take one training example at a time, in order to learn just a little bit of information for the unknown parameters based on the input, the output of the model, and the desired output of the model. At every iteration we run our model, evaluate the errors, and follow an update rule that has some notion of penalizing the model parameters that caused the error. This idea is a very old one, dating back to 1986. We call this method "backpropagating" the errors (see "Learning representations by back-propagating errors", David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams, 1988).
4.1 Language Models (Unigrams, Bigrams, etc.)

First, we need to create such a model that will assign a probability to a sequence of tokens. Let us start with an example: "The cat jumped over the puddle." A good language model will give this sentence a high probability because it is a completely valid sentence, syntactically and semantically. Similarly, the sentence "stock boil fish is toy" should have a very low probability because it makes no sense.
Mathematically, we can call this probability on any given sequence of n words:

$$P(w^{(1)}, w^{(2)}, \dots, w^{(n)})$$

We can take the unary language model approach and break apart this probability by assuming the word occurrences are completely independent:

$$P(w^{(1)}, w^{(2)}, \dots, w^{(n)}) = \prod_{i=1}^{n} P(w^{(i)})$$
However, we know this is a bit ludicrous because we know the next word is highly contingent upon the previous sequence of words. And the silly sentence example might actually score highly. So perhaps we let the probability of the sequence depend on the pairwise probability of a word in the sequence and the word next to it. We call this the bigram model and represent it as:

$$P(w^{(1)}, w^{(2)}, \dots, w^{(n)}) = \prod_{i=2}^{n} P(w^{(i)} \mid w^{(i-1)})$$

Again, this is certainly a bit naive since we are only concerning ourselves with pairs of neighboring words rather than evaluating a whole sentence; but as we will see, this representation gets us pretty far along.
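As a small illustration of the bigram model, here is a sketch (Python; the single-sentence training corpus and raw maximum-likelihood counts are assumptions made purely for demonstration, not how the notes estimate these probabilities later):

```python
from collections import Counter

corpus = "the cat jumped over the puddle".split()

unigram_counts = Counter(corpus)
bigram_counts = Counter(zip(corpus, corpus[1:]))

def bigram_prob(prev, word):
    """Maximum-likelihood estimate of P(word | prev) from raw counts."""
    if unigram_counts[prev] == 0:
        return 0.0
    return bigram_counts[(prev, word)] / unigram_counts[prev]

def sentence_prob(words):
    """P(w1, ..., wn) under the bigram model, ignoring the start-of-sentence term."""
    p = 1.0
    for prev, word in zip(words, words[1:]):
        p *= bigram_prob(prev, word)
    return p

print(sentence_prob("the cat jumped over the puddle".split()))  # 0.25: seen, plausible sentence
print(sentence_prob("stock boil fish is toy".split()))          # 0.0: nonsense under this corpus
```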
Note that in the Word-Word Matrix with a context of size 1, we can basically learn these pairwise probabilities. But again, this would require computing and storing global information about a massive dataset.
Now that we understand how we can think about a sequence of tokens having a probability, let us observe some example models that could learn these probabilities.

4.2 Continuous Bag of Words Model (CBOW)

One approach is to treat {"The", "cat", "over", "the", "puddle"} as a context and, from these words, be able to predict or generate the center word "jumped". This type of model we call a Continuous Bag of Words (CBOW) Model: predicting a center word from the surrounding context.

Let's discuss the CBOW Model above in greater detail. First, we set up our known parameters. Let the known parameters in our model be the sentence represented by one-hot word vectors.
The input one-hot vectors, or context, we will represent with $x^{(i)}$, and the output as $y^{(i)}$. In the CBOW model, since we only have one output, we just call this y, which is the one-hot vector of the known center word. Now let's define our unknowns in the model.

Notation for the CBOW Model:
- $w^{(i)}$: word i from vocabulary V
- $W^{(1)} \in \mathbb{R}^{n \times |V|}$: input word matrix
- $u^{(i)}$: i-th column of $W^{(1)}$, the input vector representation of word $w^{(i)}$
- $W^{(2)} \in \mathbb{R}^{|V| \times n}$: output word matrix
- $v^{(i)}$: i-th row of $W^{(2)}$, the output vector representation of word $w^{(i)}$
We create two matrices, $W^{(1)} \in \mathbb{R}^{n \times |V|}$ and $W^{(2)} \in \mathbb{R}^{|V| \times n}$, where n is an arbitrary size which defines the size of our embedding space. $W^{(1)}$ is the input word matrix, such that the i-th column of $W^{(1)}$ is the n-dimensional embedded vector for word $w^{(i)}$ when it is an input to this model. We denote this $n \times 1$ vector as $u^{(i)}$. Similarly, $W^{(2)}$ is the output word matrix. The j-th row of $W^{(2)}$ is an n-dimensional embedded vector for word $w^{(j)}$ when it is an output of the model. We denote this row of $W^{(2)}$ as $v^{(j)}$. Note that we do in fact learn two vectors for every word $w^{(i)}$ (i.e. an input word vector $u^{(i)}$ and an output word vector $v^{(i)}$).
We break down the way this model works in these steps:

1. We generate our one-hot word vectors $(x^{(i-C)}, \dots, x^{(i-1)}, x^{(i+1)}, \dots, x^{(i+C)})$ for the input context of size C.
2. We get our embedded word vectors for the context: $u^{(i-C)} = W^{(1)} x^{(i-C)}$, $u^{(i-C+1)} = W^{(1)} x^{(i-C+1)}$, ..., $u^{(i+C)} = W^{(1)} x^{(i+C)}$.
3. Average these vectors to get $h = \frac{u^{(i-C)} + u^{(i-C+1)} + \dots + u^{(i+C)}}{2C}$.
4. Generate a score vector $z = W^{(2)} h$.
5. Turn the scores into probabilities $\hat{y} = \mathrm{softmax}(z)$.
6. We desire our generated probabilities $\hat{y}$ to match the true probabilities y, which also happens to be the one-hot vector of the actual word.
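A minimal sketch of that forward pass (NumPy; the vocabulary size, embedding size, context indices, and random initialization are placeholder assumptions, and no training step is shown):

```python
import numpy as np

rng = np.random.default_rng(0)
V, n, C = 8, 4, 2                      # vocabulary size, embedding size, context size

W1 = rng.normal(size=(n, V))           # input word matrix; columns are the u^(i)
W2 = rng.normal(size=(V, n))           # output word matrix; rows are the v^(j)

def one_hot(i, V):
    x = np.zeros(V)
    x[i] = 1.0
    return x

context_ids = [0, 1, 3, 4]             # indices of the 2C surrounding words
center_id = 2                          # index of the true center word

u = [W1 @ one_hot(i, V) for i in context_ids]   # step 2: u^(i) = W^(1) x^(i)
h = np.mean(u, axis=0)                          # step 3: average of the 2C context vectors
z = W2 @ h                                      # step 4: score vector
y_hat = np.exp(z) / np.exp(z).sum()             # step 5: softmax(z)

loss = -np.log(y_hat[center_id])                # cross entropy against the one-hot y
print(y_hat.sum(), loss)                        # probabilities sum to 1; loss is >= 0
```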
Figure 1: This image demonstrates how CBOW works and how we must learn the transfer matrices.

So now that we have an understanding of how our model would work if we had a $W^{(1)}$ and a $W^{(2)}$, how would we learn these two matrices? Well, we need to create an objective function. Very often, when we are trying to learn a probability from some true probability, we look to information theory to give us a measure of the distance between two distributions. Here, we use a popular choice of distance/loss measure: cross entropy, $H(\hat{y}, y)$. The intuition for the use of cross-entropy in the discrete case can be derived from the formulation of the loss function:
$$H(\hat{y}, y) = -\sum_{j=1}^{|V|} y_j \log(\hat{y}_j)$$

Let us concern ourselves with the case at hand, which is that y is a one-hot vector. Thus we know that the above loss simplifies to simply:

$$H(\hat{y}, y) = -y_i \log(\hat{y}_i)$$

In this formulation, i is the index where the correct word's one-hot vector is 1. We can now consider the case where our prediction was perfect and thus $\hat{y}_i = 1$. We can then calculate $H(\hat{y}, y) = -1 \log(1) = 0$. Thus, for a perfect prediction, we face no penalty or loss. Now let us consider the opposite case, where our prediction was very bad and thus $\hat{y}_i = 0.01$.
As before, we can calculate our loss to be $H(\hat{y}, y) = -1 \log(0.01) \approx 4.605$. We can thus see that for probability distributions, cross entropy provides us with a good measure of distance.
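The arithmetic is easy to check directly. A tiny sketch (NumPy; the two predicted distributions are made up to mirror the perfect and bad cases in the text):

```python
import numpy as np

def cross_entropy_one_hot(y_hat, i):
    """H(y_hat, y) when y is one-hot with its 1 at index i: simply -log(y_hat_i)."""
    return float(-np.log(y_hat[i]))

i = 1                                            # index of the correct word
perfect = np.array([0.0, 1.0, 0.0, 0.0])         # y_hat_i = 1
bad     = np.array([0.33, 0.01, 0.33, 0.33])     # y_hat_i = 0.01

print(cross_entropy_one_hot(perfect, i))         # -log(1) = 0.0, no penalty
print(cross_entropy_one_hot(bad, i))             # -log(0.01) ≈ 4.605, large penalty
```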
We thus formulate our optimization objective as:

$$\begin{aligned}
\text{minimize } J &= -\log P(w^{(i)} \mid w^{(i-C)}, \dots, w^{(i-1)}, w^{(i+1)}, \dots, w^{(i+C)}) \\
&= -\log P(v^{(i)} \mid h) \\
&= -\log \frac{\exp\!\left(v^{(i)T} h\right)}{\sum_{j=1}^{|V|} \exp\!\left(v^{(j)T} h\right)} \\
&= -v^{(i)T} h + \log \sum_{j=1}^{|V|} \exp\!\left(v^{(j)T} h\right)
\end{aligned}$$

Since we use gradient descent to update all relevant word vectors ($v^{(i)}$ and $u^{(j)}$), we calculate the gradients in the following manner: To be added after Assignment 1 is graded.
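Numerically, the last line is just the negative log-softmax of the score for the correct word. A quick sketch confirming the equivalence (NumPy, with random placeholder vectors standing in for the learned parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
V, n = 8, 4
W2 = rng.normal(size=(V, n))        # rows play the role of the output vectors v^(j)
h = rng.normal(size=n)              # averaged context vector from the CBOW steps
i = 3                               # index of the true center word

z = W2 @ h                          # scores z_j = v^(j)T h

J_softmax = -np.log(np.exp(z[i]) / np.exp(z).sum())   # -log softmax(z)[i]
J_expanded = -z[i] + np.log(np.exp(z).sum())          # -v^(i)T h + log sum_j exp(v^(j)T h)

print(np.isclose(J_softmax, J_expanded))              # True: the two forms agree
```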
4.3 Skip-Gram Model

Skip-Gram Model: predicting surrounding context words given a center word.

Another approach is to create a model such that, given the center word "jumped", the model will be able to predict or generate the surrounding words "The", "cat", "over", "the", "puddle". Here we call the word "jumped" the context. We call this type of model a Skip-Gram model.
