AI Practitioner Exam Prep - 3D: Training
Training (Learning) Types Terms
Continued Pre-training = you provide unlabeled data (U) to further pre-train a foundation model (FM) by exposing it to certain types of inputs.
Deductive = applying general rules to reach specific outcomes.
Emergent = at large scale, models develop skills that were not explicitly programmed into them.
Federated Learning = instead of bringing data to a central server (the traditional approach), this brings the model to the data. Good for data privacy and local compliance.
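A minimal sketch of the federated averaging idea behind this (the client weights and their shapes are illustrative): each client trains locally, and only learned weights, never raw data, travel to the server.

    import numpy as np

    def federated_average(client_weights):
        """Average model weights collected from clients (the FedAvg core step)."""
        # Stack each client's weight vector and take the element-wise mean.
        return np.mean(np.stack(client_weights), axis=0)

    # Hypothetical: three clients trained locally on their own private data;
    # only the resulting weights are sent to the central server.
    client_weights = [np.array([0.2, 0.5]), np.array([0.3, 0.4]), np.array([0.1, 0.6])]
    global_weights = federated_average(client_weights)  # -> [0.2, 0.5]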
"Fine tuning" = improves existing pre-trained LM using L (ex: industry-specific data) or pairs of input and desired output. Most important task for fine tuning is labeling with accurate and relevant labels. Types are instruction tuning, RHLF, adapting models for specific domains, transfer learning, and continuous pretraining.
"Generalization" = model's ability to apply knowledge from training on new unseen data.
Inductive = using specific evidence to infer an outcome. Builds a general model to predict future, unseen data.
Instruction Tuning = a method of fine-tuning LLMs on instructional prompts paired with desired outputs.
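A sketch of what one instruction-tuning training record might look like (the field names are illustrative, not any specific dataset's schema):

    # One hypothetical instruction-tuning example: a prompt phrased as an
    # instruction, paired with the desired model response.
    example = {
        "instruction": "Summarize the following support ticket in one sentence.",
        "input": "Customer reports the mobile app crashes when uploading photos...",
        "output": "The mobile app crashes during photo uploads for this customer.",
    }
    # Fine-tuning on many such pairs teaches the LLM to follow instructions.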
"Learning Rate" = compares multiple trials to see improvement rate.
"Masking" of Input = intentionally hiding parts of the input, forces models to understand context
Training in ML = iteratively teaching an ML model to find patterns, make decisions, or generate content.
Transductive = predicts labels for a fixed set of unlabeled data (U) by using both the labeled (L) training data and the distribution of the unlabeled test set. Optimizes for performance on that specific dataset rather than a general model.
Transfer Learning = takes an existing model pre-trained on a supervised task and then fine-tunes it for a new, related task.
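A minimal PyTorch sketch of the common freeze-and-replace pattern (the "pre-trained" base here is a stand-in, not a real downloaded model):

    import torch

    # Stand-in for a model pre-trained on a large supervised task.
    base = torch.nn.Sequential(
        torch.nn.Linear(16, 8),
        torch.nn.ReLU(),
    )
    for p in base.parameters():
        p.requires_grad = False          # freeze the pre-trained knowledge

    head = torch.nn.Linear(8, 3)         # new task-specific layer, trained from scratch
    model = torch.nn.Sequential(base, head)

    # Only the head's parameters receive gradient updates during fine tuning.
    optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)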
Training (Learning) Parameters Terms
Batch Size = amount of data processed per weight update. Small = faster iterations and often better generalization; large = more stable gradients and better GPU efficiency.
Epochs = used in training neural networks. One epoch = one full pass through the training dataset.
Hyperparameters = human-set dials (e.g., learning rate, batch size, epochs).
Parameters = weights learned by the model during training.
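All three parameter terms in one minimal PyTorch training-loop sketch (the values are illustrative hyperparameters, not recommendations, and the data is random):

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Hyperparameters: dials a human sets before training.
    EPOCHS, BATCH_SIZE, LR = 3, 8, 1e-3

    data = TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,)))
    loader = DataLoader(data, batch_size=BATCH_SIZE, shuffle=True)  # data per update

    model = torch.nn.Linear(16, 2)       # model.parameters() = the learned weights
    optimizer = torch.optim.SGD(model.parameters(), lr=LR)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(EPOCHS):          # one epoch = one full pass over the dataset
        for xb, yb in loader:            # each batch triggers one weight update
            optimizer.zero_grad()
            loss_fn(model(xb), yb).backward()
            optimizer.step()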