**Logistics:**

- No final exam
- Four assignments (plus one optional)
- Final project
- Either open-ended research project or replicating/extending papers

- Small participation scores

**Content:**

- Course materials have been revamped because of LLMs
- Went over common use cases of LLMs, failure modes

**Classical NLP day!**

- Language modeling task: estimate a probability function for all possible strings
- Idea: use an empirical distribution over training sentences
- No generalization (literally cannot be out-of-distribution)

- BoW (Unigram) models:
- Append a `STOP` token to all sentences
- Multiply the probability of all tokens
- **Not** conditional: word order doesn’t matter

- Bigram models:
- Append a `STOP` token and prepend a `START` token
- Conditioned on the previous token
- Same sort of autoregressive generation

- N-gram models:
- Condition on the previous \(N-1\) tokens when generating the next token
- Practically cannot go beyond 3 or 4-gram, Google got ~9-gram
- Learned size is: \(|\text{dict size}|^N\)
- Grows exponentially

- This is the MLE
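
A minimal counting sketch of this MLE for a bigram model; the toy corpus and names here are made up for illustration:

```python
from collections import Counter, defaultdict

# Toy corpus; every sentence gets START/STOP markers added.
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "cat", "ran"]]

bigram_counts = defaultdict(Counter)
for sentence in corpus:
    tokens = ["<START>"] + sentence + ["<STOP>"]
    for prev, curr in zip(tokens, tokens[1:]):
        bigram_counts[prev][curr] += 1

def p_bigram(curr, prev):
    """MLE estimate: count(prev, curr) / count(prev, *)."""
    total = sum(bigram_counts[prev].values())
    return bigram_counts[prev][curr] / total if total else 0.0

print(p_bigram("cat", "the"))  # 2/3
```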

**Finish classical NLP day!**

- While the total data for N-gram models doesn’t change, the number of examples per context decreases as N increases (and thus causes sparsity problems)
- *“The Shannon Game”*: how well can we predict the next word?
- Given a prefix, what’s the next word?

- We use typical ML/DL data splits for performance evaluation
- Train, dev, test sets

- Measures of fit:
- Likelihood, (negative) log-likelihood
- Perplexity:
- Inverse probability of the data, normalized by number of tokens
- \(PP_M(D') = p_M(D')^{-\frac{1}{N}}\)
- “Geometric mean”
- The amount of “surprise” in the data
- Exponentiated cross-entropy! \(PP(X) = \exp(H(X))\)
- Example: perplexity of a uniform distribution over \(i\) outcomes is \(i\) (kinda a measure of how hard it is to guess); see the quick check below
- Perplexity represents the branching factor of data
- Frontier LLMs have perplexity \(<10\)
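
A quick numeric sanity check of the perplexity definition, using the uniform-distribution example above (toy numbers only):

```python
import math

def perplexity(log_probs):
    """exp of the average negative log-likelihood per token (exponentiated cross-entropy)."""
    n = len(log_probs)
    return math.exp(-sum(log_probs) / n)

# Uniform distribution over i = 6 outcomes: every token has prob 1/6,
# so perplexity should come out to exactly 6.
i = 6
log_probs = [math.log(1 / i)] * 10
print(perplexity(log_probs))  # 6.0
```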

- Cross-entropy:
- For two prob distributions \(P, Q\) you want to compare, cross entropy is: \(H(P, Q) = -\mathbb{E}_p[\log q] = - \sum_{x \in X} p(x) \log q(x)\)
- Non-symmetric
- If we want to model natural language distribution \(L\) and fit \(P_M\) to minimize cross-entropy: \(H(L, P_M)\)
- Use a *Monte-Carlo* estimate for \(L\): assume \(l(x_i) \simeq \frac{1}{N}\) over some dataset \(D\) drawn from \(L\)
- Basically just saying all phrases have an equal chance of appearing
- \(H(L, P_M) \simeq -\sum_{i=1}^N \frac{1}{N} \log p_M(x_i | x_0, \dots, x_{i-1})\)

- Intrinsic evaluation: more about measuring facts about the underlying model
- Extrinsic evaluation: more application and context focused

- Token frequencies follow Zipf’s law, which is roughly a power law

- Models should never assign a probability of zero:
- Laplace smoothing: add one to every count
- Add-k: add some \(0 < k < 1\) instead
- Better, but still not widely used

- Convex combinations (interpolation):
- Take combination of multiple distributions
- Better than add-k typically
- Optimal hyperparameter search: human guessing
- E.g. linear combination of 1-gram, 2-gram, 3-gram distributions
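
A minimal sketch of that kind of interpolation; the lambda weights below are hypothetical hand-picked values (the “human guessing” above), not anything from the lecture:

```python
# Hypothetical hand-tuned interpolation weights (must be non-negative and sum to 1).
LAMBDAS = (0.1, 0.3, 0.6)

def interpolated_prob(p_uni, p_bi, p_tri):
    """Convex combination of 1-gram, 2-gram, and 3-gram estimates.

    Because the weights sum to 1, the result is a valid probability and is
    nonzero whenever the unigram estimate is."""
    l1, l2, l3 = LAMBDAS
    return l1 * p_uni + l2 * p_bi + l3 * p_tri

# Toy numbers: the trigram estimate is 0 (unseen context), but the
# interpolated probability stays positive thanks to the lower-order terms.
print(interpolated_prob(p_uni=0.01, p_bi=0.05, p_tri=0.0))  # 0.016
```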

- Unknown token `<UNK>`, known as “UNC!”:
- Token representing out-of-vocab tokens
- Model can generate `<UNK>` as well

**Finally done with lecture 2!!!**

- 470,000 words in Webster’s
- Character level tokenization:
- Each token is a character

- BPE:
- Tokenize subwords, similar to a compression algorithm
- We kinda did this in CSE 143
- Greedily merge the most common token pairs, starting from characters
- Common way to tokenize natural text; at inference time, perform the same merges as when the BPE vocabulary was built
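
A rough sketch of the greedy BPE merge loop described above, on a toy corpus; the word-frequency representation is just one common way to implement it:

```python
from collections import Counter

def get_pair_counts(vocab):
    """Count adjacent symbol pairs across the corpus, weighted by word frequency."""
    pairs = Counter()
    for symbols, freq in vocab.items():
        for pair in zip(symbols, symbols[1:]):
            pairs[pair] += freq
    return pairs

def merge_pair(vocab, pair):
    """Replace every occurrence of `pair` with the concatenated symbol."""
    merged = {}
    for symbols, freq in vocab.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Toy corpus: words split into characters, mapped to their frequencies.
vocab = {tuple("lower"): 5, tuple("lowest"): 2, tuple("newer"): 6}
merges = []
for _ in range(4):  # learn 4 merges
    pair = get_pair_counts(vocab).most_common(1)[0][0]
    merges.append(pair)
    vocab = merge_pair(vocab, pair)
print(merges)
```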

- Tokenizers fit their training data, so they can produce odd splits
- Whitespace and capitalization will typically cause different tokenizations
- Tokenization variants:
- Strip whitespace, add spaces during decoding. Requires a “continue word” special character.
- No pretokenization:
- Sentencepiece tokenization
- Generally used for non-English, character-based languages

- Byte-based variations!
- Using emojis!

- WordPiece: merge token pairs scored by their normalized probability of occurring (pair count divided by the counts of its parts)
- Tokenizer-free modelling

- One-hot encoding: a vector of all zeros, with a one at the token’s index
- Embeddings: basically a differentiable lookup table for token representations
- Have relationships in latent space

- Basic NN review:
- Just simple basics

*Guest Lecture - Peter West*

- “Large” in LLMs: ~GPT-2 size or greater
- Commonsense knowledge:
- Is commonsense knowledge too hard for compact models?
- Can we distill commonsense knowledge to smaller models?

- Knowledge distillation:
- Minimize cross-entropy between teacher and student models
- Can’t do this over all strings; instead do it over a subset of important data

- We get a knowledge graph (DS) and student model as output

- Used GPT-3 to produce massive corpus of commonsense examples:
- Human baseline producing examples: ~86%
- Produced much more (10X) data, with 10% drop in example accuracy
- Attempted to use RoBERTa to filter data
- Filtering increased accuracy to about 10% above the human baseline, while keeping roughly half the examples

- Got final student model by finetuning GPT-2 on synthetic data
- Can improve over baseline GPT-3 model

- Humans are good supervisors, not creators (P-NP)
- Spoke about how models can be better at generating outputs than at reasoning about/verifying them
- Kinda the reverse of the P-vs-NP analogy for humans

*More about neural networks*

- NNs:
- Required:
- Training data
- Model family
- Differentiable loss function

- Learning problem:
- \(\hat{\theta} = \arg\min_\theta \frac{1}{N} \sum_{i=1}^N L(y_i, \hat{y}_i = f_\theta(x_i))\)

- Common loss functions:
- MSE loss
- L1 loss
- Binary cross-entropy loss: \(-[y \log (\hat{y}) + (1 - y) \log (1 - \hat{y})]\)
- Cross-entropy loss: \(-\sum_{i=1}^C y_i \log (\hat{y}_i)\)

- Just use gradient descent (on subbatches)
- Backprop uses the chain rule
- Softmax: \(\text{softmax}(y)_i = \frac{\exp (y_i)}{\sum_{j=1}^d \exp (y_j)}\)
- Turns logits into probs!
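
A quick numpy check of the softmax and cross-entropy formulas above, on toy logits:

```python
import numpy as np

def softmax(logits):
    """softmax(y)_i = exp(y_i) / sum_j exp(y_j), with max-subtraction for stability."""
    z = np.exp(logits - logits.max())
    return z / z.sum()

def cross_entropy(y_true, y_hat):
    """-sum_i y_i * log(y_hat_i) for a one-hot (or soft) target."""
    return -np.sum(y_true * np.log(y_hat))

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)             # turns logits into probs
target = np.array([1.0, 0.0, 0.0])  # one-hot label for class 0
print(probs, cross_entropy(target, probs))
```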

*It is okay to struggle and ask questions!*

- RNNs have a hidden internal state
- This hidden state is modified by the input tokens at each time step
- The hidden state is used to generate new tokens at each step

- Training procedure:
- Generate logits, take loss between ground truth and logits
- Backprop and update weights

- SGD:
- Take minibatches

- Vanishing gradients:
- No learning when gradients are zero
- Same weights run many times in a row for RNNs
- If weights are small, grads shrink repeatedly to zero
- If weights are big, grads explode

- LSTM:
- Has hidden representation, many gates determining updates
- LSTM has a forgetting problem
- Not parallelizable across time steps

- Transformers/Attention:
- All you need
- Similar to fuzzy lookup for databases
- Generates key, query, value vectors

- Query, key, and value vectors for each token are generated
- Scores are: \(S = \text{softmax}\left(\frac{QK^T}{\sqrt{d}}\right)\)
- Output is: \(SV\)
- Total output is: \[\text{softmax}\left(\frac{QK^T}{\sqrt{d}}\right)V\]
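
A minimal numpy sketch of single-head scaled dot-product attention as written above (random toy matrices; no masking, no multi-head):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(QK^T / sqrt(d)) V for one attention head."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                          # (n, n) raw scores
    scores = scores - scores.max(axis=-1, keepdims=True)   # numerical stability
    S = np.exp(scores)
    S = S / S.sum(axis=-1, keepdims=True)                  # row-wise softmax
    return S @ V                                           # weighted sum of values

n, d = 4, 8  # 4 tokens, dimension 8
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```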

- Limitations of transformers:
- No notion of token order: add position embeddings
- At input, add a “positional embedding” to each input token
- Sinusoidal embeddings: not learnable, but can support any length
- Learned embeddings: positions can be learned, fixed length

- No nonlinearities: add FF layers between attention blocks
- Looking into the future: mask scores before softmax

- Transformers also have residual connections
- Multi-head attention:
- Run multiple “attention” operations in parallel, concatenate results
- Can all be done in parallel

- Layer norm: normalize the values by layer
- Transformer encoder: don’t use masking (can attend to the whole sequence)
- Transformer encoder-decoder: we use cross-attention
- The keys and values come from the encoder (queries from the decoder)

- Much of this class is about learning how to get better, vs just the material
- Supervised learning: struggles out of domain
- Self-supervised pretraining can be used for transformers
- Catastrophic forgetting: when fine-tuning, original performance degrades
- Parameter-efficient fine-tuning: only update a fraction of the weights
- Adapter layers:
- Add additional layers within the model which are trained

- Prefix-tuning:
- Finetuning based on task-prefix prepended to the LLM
- The task-prefix is trained as parameters
- Very clever! Made by an early-year student!

- Prompt-tuning:
- Similar to prefix-tuning, fewer parameters

- LoRA (Low-Rank Adaptation):
- Learn low-rank “delta” matrix between ideal and current weights
- 10,000x fewer fine-tuned parameters
- 3x GPU memory saving for GPT-3
- Given pretrained weights \(W \in \mathbb{R}^{d \times k}\)
- Represent the update as \(BA\), with \(A \in \mathbb{R}^{r \times k}\), \(A \sim \mathcal{N}(0, \sigma^2)\) and \(B \in \mathbb{R}^{d \times r}\), \(B = 0\) at init (see the sketch below)
- Only train \(A\) and \(B\)
- QLoRA is quantized LoRA
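
A minimal PyTorch-style sketch of a LoRA linear layer matching the description above: the pretrained weight is frozen and only \(A\) and \(B\) are trained; the shapes and init scale here are illustrative, not the paper’s exact settings:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """y = x (W + BA)^T, where only the low-rank A and B are trained."""
    def __init__(self, d_in, d_out, r=8, sigma=0.01):
        super().__init__()
        # Frozen "pretrained" weight W (random here as a stand-in).
        self.weight = nn.Parameter(torch.randn(d_out, d_in), requires_grad=False)
        self.A = nn.Parameter(torch.randn(r, d_in) * sigma)  # A ~ N(0, sigma^2)
        self.B = nn.Parameter(torch.zeros(d_out, r))         # B = 0, so the delta starts at zero

    def forward(self, x):
        delta = self.B @ self.A        # low-rank update, shape (d_out, d_in)
        return x @ (self.weight + delta).T

layer = LoRALinear(d_in=16, d_out=16)
print(layer(torch.randn(2, 16)).shape)  # torch.Size([2, 16])
```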

- Encoder-only models:
- Mask particular words, predict the masked words (~15%)
- `<CLS>` token prepended, `<SEP>` token between each text segment
- Segment embeddings are added to each of the token sequences

- BERT:
- Predicts random masked words in the middle
- Large empirical gain
- Base: 110M, Large: 340M

- RoBERTa:
- Can continue to train BERT much, much more
- Pretty much always better than BERT
- Is easier to fine-tune vs BERT

- T5:
- Masked variable-length subsequences
- Reconstruct the sentences

- Decoder (GPT models):
- Can use special tokens to do many types of tasks

- Certain types of applications need differing open-endedness
- Natural Language Generation (NLG):
- Train on NLL with implicit teacher forcing
- Decoding:
- Take in probability distribution, predict which token is next

- Greedy Decoding:
- Predict argmax token

- Beam search:
- At each depth, keep the \(n\) highest-probability partial sequences
- Often go-to decoding algorithm

- One of the interesting problems is how to generate text once we have a model
- Many variations of beam search
- Most likely sequences are repetitive
- When repeating a phrase, it becomes more likely to be generated in the future
- Can avoid repeating N-grams as a simple fix
- Contrastive decoding: find the sequence \(x\) that maximizes \(\log P_\text{large LM}(x) - \log P_\text{small LM}(x)\)

- Humans don’t say high-prob things…why say something that’s obvious?
- Top-k sampling:
- Truncate the prob distribution to \(k\) tokens, sample from those token probs
- \(k\) can often be too large or small depending on context

- Top-p (nucleus) sampling:
- Instead sample from the smallest set of top tokens whose cumulative probability mass exceeds \(p\)

- Softmax temperature:
- Divide logits by \(t\) before softmax
- Temperature \(t > 1\) flattens toward uniform, \(t < 1\) is more spiky
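
A minimal numpy sketch combining temperature, top-k, and top-p truncation on a single next-token distribution (toy logits; the function name is made up):

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def sample_next_token(logits, temperature=1.0, top_k=None, top_p=None, rng=None):
    """Apply temperature, then optional top-k / top-p truncation, then sample."""
    rng = rng or np.random.default_rng()
    probs = softmax(np.asarray(logits, dtype=float) / temperature)
    keep = np.ones_like(probs, dtype=bool)
    if top_k is not None:
        # Keep only the k highest-probability tokens.
        keep &= probs >= np.sort(probs)[-top_k]
    if top_p is not None:
        # Keep the smallest set of tokens whose cumulative mass exceeds p.
        order = np.argsort(probs)[::-1]
        cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
        mask = np.zeros_like(keep)
        mask[order[:cutoff]] = True
        keep &= mask
    probs = np.where(keep, probs, 0.0)
    probs /= probs.sum()  # renormalize over the kept tokens
    return rng.choice(len(probs), p=probs)

print(sample_next_token([2.0, 1.0, 0.5, -1.0], temperature=0.7, top_k=3, top_p=0.9))
```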

- Generate then re-rank:
- Generate \(n\) sequences with same input
- Re-rank those sequences by some score (e.g. perplexity)

- Speculative decoding:
- Idea: not all tokens are as hard to generate! (and running sequence through LLM is nearly as expensive as one token)
- Use small model to generate \(k\)-length draft
- Run draft through large LM to compute token distribution

- Cases:
- \(q_i \geq p_i\) accept the token
- \(q_i < p_i\) accept with probability \(\frac{q_i}{p_i}\)

- When rejecting and re-sampling, only sample from the distribution proportional to \(\max(q_i - p_i, 0)\) (prob of big model minus small model, clipped to be non-negative)
- ~2-2.5x decoding speedup
- Models must have the same decoding scheme
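
A minimal sketch of the per-token accept/reject rule above, with \(q\) the big model’s probability and \(p\) the small (draft) model’s probability for the drafted token (toy distributions only):

```python
import numpy as np

def accept_or_resample(draft_token, p_small, q_big, rng):
    """Speculative decoding acceptance rule for a single drafted token.

    p_small, q_big: full next-token distributions from the draft and big models."""
    p, q = p_small[draft_token], q_big[draft_token]
    if q >= p or rng.random() < q / p:      # accept outright, or with prob q/p
        return draft_token
    # Rejected: resample from the non-negative residual max(q - p, 0), renormalized.
    residual = np.maximum(q_big - p_small, 0.0)
    residual /= residual.sum()
    return rng.choice(len(residual), p=residual)

rng = np.random.default_rng(0)
p_small = np.array([0.7, 0.2, 0.1])   # draft model's distribution
q_big = np.array([0.5, 0.3, 0.2])     # big model's distribution
print(accept_or_resample(0, p_small, q_big, rng))
```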

- Issues (training):
- Teacher-forcing influences models to “trust” previous generations
- MLE discourages unique text generation
- Unlikelihood training:
- Add another loss function which penalizes generation of certain undesirable sequences

- Exposure bias:
- LLMs get fed in “good” tokens, not bad or random ones, unlike during generation
- Scheduled Sampling: with prob \(p\), use decoded token as next input vs true token

- Reinforce algorithm:
- Sample from the model, score the sequence, use score as reward to train
- \(L_{RL} = - \sum_{t=1}^T r(\hat{y}_t) \log P(\hat{y}_t \mid y^*, \hat{y}_{<t})\)
- \(r(\cdot)\): reward model
- \(y^*\): input sequence
- \(\hat{y}\): sequence generated by the model given \(y^*\)

- RL with powerful models can “reward hack” while not improving at the underlying task
- RLHF - use PPO with human scores
- Summarization: try using autoencoders
- We typically need multiple loss terms to account for fluency, reconstruction, length
- We can’t backprop through token generation
- Solution: Gumbel-Softmax
- \(y_t \sim \text{softmax}(S_t) \;\simeq\; e_t = \sum_{i=1}^{|V|} e(w_i)\,\text{softmax}\!\left(\frac{S_t + \epsilon}{\tau}\right)_i\)
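
A minimal numpy sketch of this relaxation: instead of a hard sampled token, take a soft mixture of token embeddings weighted by \(\text{softmax}((S_t + \epsilon)/\tau)\), with \(\epsilon\) as Gumbel noise (toy shapes and values):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, emb_dim, tau = 5, 4, 0.5

S_t = rng.normal(size=vocab_size)                     # logits over the vocabulary
E = rng.normal(size=(vocab_size, emb_dim))            # embedding e(w_i) for each token

eps = -np.log(-np.log(rng.uniform(size=vocab_size)))  # Gumbel noise
z = (S_t + eps) / tau
weights = np.exp(z - z.max())
weights /= weights.sum()                              # softmax((S_t + eps) / tau)

e_t = weights @ E                                     # soft, differentiable "token embedding"
print(weights.round(3), e_t.shape)
```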

- Evaluation metrics:
- Content-overlap metrics: often not great because they fail to capture semantics
- Model-based metrics:
- BERTScore: use embeddings from BERT
- Use vector sims
- Mauve: … TBC

- Mauve: measure of difference between human (\(P\)) and machine (\(Q\)) text distribution
- KL-divergence: \(KL(P \| R_\lambda) = \sum_x P(x) \log \frac{P(x)}{R_\lambda (x)}\)
- Not symmetric; requires \(R_\lambda(x) \neq 0\)
- Interpolate between the two distributions to draw a curve

- Human Evals:
- Humans have been gold standard, but crowdworkers are now using LLMs

- Alignment datasets:
- Synthetic data: use seq2seq LLMs to generate data from templates
- Convert existing NLP datasets to instruction-datasets
- Allows supervised fine-tuning as well

- Human annotation:
- Use human-generated data

- Other methods involving getting data from frontier models

- RLHF:
- Showed RL is powerful in NLP
- Human feedback models perform much better than supervised fine-tuning
- Pipeline:
- Collect human preference comparisons between pairs of samples
- Train reward model from those samples
- Run PPO on LLM using reward model

- Use RL and PPO
- Can be done online by continuing to get more data samples, updating reward model with new data

- Human preference:
- Easier to do classification (even as humans) vs regression
- Why RLHF is preference based

- Bradley-Terry model: models the probability that one output is better than another
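
A minimal PyTorch-style sketch of the Bradley-Terry preference loss a reward model trains on: the probability the chosen response beats the rejected one is \(\sigma(r_\text{chosen} - r_\text{rejected})\). The scalar rewards here are toy values standing in for the outputs of an LLM backbone with a linear head:

```python
import torch
import torch.nn.functional as F

def bradley_terry_loss(r_chosen, r_rejected):
    """Negative log-likelihood that the chosen output beats the rejected one:
    -log sigmoid(r_chosen - r_rejected)."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy scalar rewards for a batch of 3 preference pairs.
r_chosen = torch.tensor([1.2, 0.3, 2.0])
r_rejected = torch.tensor([0.5, 0.8, -1.0])
print(bradley_terry_loss(r_chosen, r_rejected))
```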

- Reward model:
- Can be multi-input classification
- Backbone models are often large LLMs
- Any type of model can be used, just by adding a linear head
- Typically only trained one epoch
- (Somewhat) low accuracy, ~60-80%

- RL with LLMs:
- Basic REINFORCE algo is exactly what you’d expect with LLMs
- Policy Gradient with regularization:
- Learns policy maximizing reward minus KL reg term
- Reference model is fixed for KL-divergence

- PPO:
- Surrogate objective: \(\text{maximize}_\theta\ \hat{\mathbb{E}}_t \left[\frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_\text{old}}(a_t|s_t)} \hat{A}_t\right]\), subject to a KL constraint
- Real (clip) objective: \(\hat{\mathbb{E}}_t \left[\min\left(r_t(\theta)\hat{A}_t,\ \text{clip}(r_t(\theta), 1-\epsilon, 1+\epsilon)\hat{A}_t\right)\right]\), where \(r_t(\theta) = \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_\text{old}}(a_t|s_t)}\) (see the sketch at the end)
- Higher is better

- Kinda fucked up to understand, I’ll look more at it though lol
- Also trains value network
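
A minimal PyTorch-style sketch of the clipped PPO objective above, written as a loss to minimize; the log-probs and advantages are toy tensors, and the KL and value-network terms are left out:

```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """Negative of the clipped surrogate: -E[min(r * A, clip(r, 1-eps, 1+eps) * A)],
    where r = pi_theta(a|s) / pi_theta_old(a|s)."""
    ratio = torch.exp(logp_new - logp_old)                  # r_t(theta)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()

# Toy per-token log-probs under the new and old policies, plus advantages.
logp_new = torch.tensor([-1.0, -0.5, -2.0], requires_grad=True)
logp_old = torch.tensor([-1.2, -0.7, -1.5])
advantages = torch.tensor([0.8, -0.3, 1.1])
loss = ppo_clip_loss(logp_new, logp_old, advantages)
loss.backward()
print(loss.item())
```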