Here are 10 common AI terms, explained in a way that is easy to understand.
Please, if anyone can write a sensible translation of any of these, leave it as a comment on this post so I can publish it later.
1. Classification: A Machine Learning task that seeks to assign data points to different groups (called targets or class labels) that are pre-determined by the training data. For example, if we have a medical dataset consisting of biological measurements (heart rate, body temperature, age, height, weight, etc.) and whether or not each person has a specific disease, we could train a classification model to predict whether or not a new person has the disease given just the biological measurements.
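To make this concrete, here is a minimal sketch of the disease example using scikit-learn's LogisticRegression; the measurements, labels, and the new patient below are invented purely for illustration.

```python
# Minimal classification sketch: predict a pre-determined class label (disease / no disease)
# from biological measurements. All numbers are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [heart rate, body temperature, age]; each label: 1 = has the disease, 0 = healthy
X_train = np.array([[72, 36.6, 25], [95, 38.2, 61], [68, 36.5, 34], [102, 38.9, 70]])
y_train = np.array([0, 1, 0, 1])

clf = LogisticRegression().fit(X_train, y_train)

# Predict the class label for a new, unseen person from their measurements alone
print(clf.predict([[88, 37.9, 55]]))  # e.g. array([1])
```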
2. Regression: A supervised learning task that tries to predict a numerical value for a given data point. For example, given the description of a house (location, number of rooms, energy label), predict the market price of the house.
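And a matching sketch of the house-price example with scikit-learn's LinearRegression; here the model predicts a number rather than a class label. The room counts, floor areas, and prices are made-up placeholders.

```python
# Minimal regression sketch: predict a numerical value (market price) from a house description.
import numpy as np
from sklearn.linear_model import LinearRegression

X_train = np.array([[3, 70], [4, 95], [2, 50], [5, 120]])   # [number of rooms, floor area in m^2]
y_train = np.array([250_000, 320_000, 180_000, 410_000])    # market prices

reg = LinearRegression().fit(X_train, y_train)
print(reg.predict([[4, 100]]))  # a numerical prediction, e.g. somewhere around 330,000
```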
3. Underfitting: A phenomenon in which a Machine Learning algorithm is not fitted well enough to the training data, resulting in low performance on both the training data and similar but distinct data. A common example of underfitting occurs when a neural network is not trained for long enough or when there is not enough training data. The converse phenomenon is overfitting.
4. Overfitting: A phenomenon in which a Machine Learning algorithm is fitted too closely to the training data, making performance on the training data very high but performance on similar but distinct data low, due to poor generalization. A common example of overfitting occurs when a neural network is trained for too long. The converse phenomenon is underfitting.
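Both phenomena can be seen in a small NumPy experiment on synthetic data, using squared error as the measure of fit: a degree-1 polynomial is too simple and underfits, while a degree-9 polynomial pushed through ten noisy points matches the training data almost exactly but typically does worse on new, similar points.

```python
# Under- vs overfitting sketch: fit polynomials of different degree to noisy samples of y = x^2.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(-3, 3, 10)
y_train = x_train**2 + rng.normal(scale=1.0, size=x_train.size)   # noisy training data
x_new = np.linspace(-2.5, 2.5, 50)                                # similar but distinct data
y_new = x_new**2

for degree in (1, 2, 9):   # too simple, about right, too flexible
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    new_err = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree}: training error {train_err:.2f}, error on new data {new_err:.2f}")
```

Degree 1 does badly on both sets (underfitting); degree 9 drives the training error to nearly zero while the error on new data usually gets worse (overfitting).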
5. Cost function: This is what Machine Learning algorithms try to minimize to achieve the best performance. It is simply the error the algorithm makes over a given dataset. It is also sometimes referred to as the “loss function.”
6. Loss function: A (generally continuous) value that is a computation-friendly proxy for the performance metric. It measures the error between the values predicted by the model and the true values we want the model to predict. During training, this value is minimized. “Loss function” is sometimes used interchangeably with “cost function,” although the two are differentiated in some contexts.
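A small illustration of the idea, using squared error: the loss is computed for each individual prediction, and averaging it over the whole dataset gives the single cost value that training tries to minimize. The numbers here are arbitrary.

```python
# Loss per prediction, cost over the dataset (here: squared error / mean squared error).
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])   # values we want the model to predict
y_pred = np.array([2.5,  0.0, 2.0, 8.0])   # values the model actually predicted

per_example_loss = (y_pred - y_true) ** 2   # squared-error loss for each data point
cost = per_example_loss.mean()              # mean squared error over the dataset
print(per_example_loss, cost)               # [0.25 0.25 0.   1.  ] 0.375
```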
7. Validation data: A subset of the data that a model is not trained on but that is used during training to verify that the model performs well on distinct data. Validation data is used for hyperparameter tuning in order to avoid overfitting.
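In practice the split is often done with a helper such as scikit-learn's train_test_split; the arrays below are just placeholders.

```python
# Hold out part of the data as validation data; the model never trains on it.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)   # 10 data points with 2 features each
y = np.arange(10)                  # 10 target values

# 20% of the data is kept aside and only used to check generalization / tune hyperparameters
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
print(X_train.shape, X_val.shape)  # (8, 2) (2, 2)
```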
8. Neural network: A specific type of Machine Learning algorithm that can be represented graphically as a network, inspired by the way biological brains work. The network represents many simple mathematical operations (addition, multiplication, etc.) that are combined to produce a complex operation that may perform a complicated task (e.g. identifying cars in an image).
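To show that “simple operations combined” really is all there is, here is a toy two-layer forward pass written in plain NumPy; the weights are random, not trained, so only the structure matters here.

```python
# A tiny neural network forward pass: multiply, add, apply a nonlinearity, repeat.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)                          # an input with 4 features

W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # layer 1: 4 inputs -> 8 hidden values
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # layer 2: 8 hidden values -> 1 output

hidden = np.maximum(0, x @ W1 + b1)             # matrix multiply, add, then ReLU
output = hidden @ W2 + b2                       # combine again into a single output
print(output)
```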
9. Parameters: Generally refers to the numbers in a neural network or Machine Learning algorithm that are changed to alter how the model behaves (sometimes also called weights). If a neural network is analogous to a radio, providing the base structure of a system, then parameters are analogous to the knobs on the radio, which are tuned to achieve a specific behavior (like tuning in to a specific frequency). Parameters are not set by the creator of the model; rather, their values are determined automatically by the training process.
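To continue the radio analogy in code: after fitting a scikit-learn LinearRegression on some made-up data, the learned “knob settings” appear as coef_ and intercept_; nobody typed those values in, training found them.

```python
# Parameters are found by training, not set by the model's creator.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.1, 3.9, 6.2, 8.1])

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)   # the parameter values the training process produced
```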
10. Hyperparameters: A value that takes part in defining the overall structure of a model or the behavior of an algorithm. Hyperparameters are not altered by the model training process and are set ahead of time, before training. Many candidate values for the hyperparameters are generally tested to find those that optimize the training process. E.g., in a neural network, the number of layers is a hyperparameter (not altered by training), whereas the values within the layers themselves (the “weights”) are parameters (altered by training). If the model is a radio, then a hyperparameter would be the number of knobs on the radio, while the values of these knobs would be parameters.
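A sketch of the difference using scikit-learn's MLPClassifier on a tiny made-up dataset: hidden_layer_sizes (and max_iter) are hyperparameters fixed before training, while the weight matrices in coefs_ are the parameters that training produces.

```python
# Hyperparameters are chosen up front; parameters come out of the training process.
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# hidden_layer_sizes is a hyperparameter: one hidden layer with 8 units, set before training
clf = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs", max_iter=2000, random_state=0)
clf.fit(X, y)

# clf.coefs_ holds the parameters (weights) that training adjusted
print([w.shape for w in clf.coefs_])   # e.g. [(2, 8), (8, 1)]
```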