This page applies to the previous release. The corresponding English page has been removed from the current release.
Deep Learning Toolbox Functions - Alphabetical List
A
AcceleratedFunction | Accelerated deep learning function (since R2021a) |
activations | Compute deep learning network layer activations |
adamupdate | Update parameters using adaptive moment estimation (Adam) (since R2019b) |
adapt | Adapt neural network to data as it is simulated |
adaptwb | Adapt network with weight and bias learning rules |
adddelay | Add delay to neural network response |
addInputLayer | Add input layer to network (since R2022b) |
additionLayer | Addition layer |
addLayers | Add layers to layer graph or network |
addMetrics | Compute additional classification performance metrics (since R2022b) |
addParameter | Add parameter to ONNXParameters object (since R2020b) |
alexnet | AlexNet convolutional neural network |
analyzeNetwork | Analyze deep learning network architecture |
assembleNetwork | Assemble deep learning network from pretrained layers |
attention | Dot-product attention (since R2022b) |
audioDataAugmenter | Augment audio data (since R2019b) |
audioDatastore | Datastore for collection of audio files |
audioFeatureExtractor | Streamline audio feature extraction (since R2019b) |
augment | Apply identical random transformations to multiple images |
augmentedImageDatastore | Transform batches to augment image data |
augmentedImageSource | (To be removed) Generate batches of augmented image data |
Autoencoder | Autoencoder class |
average | Compute performance metrics for average receiver operating characteristic (ROC) curve in multiclass problem (since R2022b) |
averagePooling1dLayer | 1-D average pooling layer (since R2021b) |
averagePooling2dLayer | Average pooling layer |
averagePooling3dLayer | 3-D average pooling layer (since R2019a) |
avgpool | Pool data to average values over spatial dimensions (since R2019b) |
B
batchnorm | Normalize data across all observations for each channel independently (since R2019b) |
batchNormalizationLayer | Batch normalization layer |
bilstmLayer | Bidirectional long short-term memory (BiLSTM) layer for recurrent neural network (RNN) |
blockedImageDatastore | Datastore for use with blocks from blockedImage objects (since R2021a) |
boxdist | Distance between two position vectors |
boxLabelDatastore | Datastore for bounding box label data (since R2019b) |
bttderiv | Backpropagation through time derivative function |
C
calibrate | Simulate and collect ranges of a deep neural network (since R2020a) |
cascadeforwardnet | Generate cascade-forward neural network |
catelements | Concatenate neural network data elements |
catsamples | Concatenate neural network data samples |
catsignals | Concatenate neural network data signals |
cattimesteps | Concatenate neural network data timesteps |
cellmat | Create cell array of matrices |
checkLayer | Check validity of custom or function layer |
classificationLayer | Classification output layer |
ClassificationOutputLayer | Classification layer |
classify | Classify data using trained deep learning neural network |
classifyAndUpdateState | Classify data using a trained recurrent neural network and update the network state |
classifySound | Classify sounds in audio signal (since R2020b) |
clearCache | Clear accelerated deep learning function trace cache (since R2021a) |
clippedReluLayer | Clipped Rectified Linear Unit (ReLU) layer |
closeloop | Convert neural network open-loop feedback to closed loop |
codegen | Generate C/C++ code from MATLAB code |
coder.DeepLearningConfig | Create deep learning code generation configuration objects |
coder.getDeepLearningLayers | Get the list of layers supported for code generation for a specific deep learning library |
coder.loadDeepLearningNetwork | Load deep learning network model |
combvec | Create all combinations of vectors |
compet | Competitive transfer function |
competlayer | Competitive layer |
compressNetworkUsingProjection | Compress neural network using projection (since R2022b) |
con2seq | Convert concurrent vectors to sequential vectors |
concatenationLayer | Concatenation layer (since R2019a) |
concur | Create concurrent bias vectors |
configure | Configure network inputs and outputs to best match input and target data |
confusion | Classification confusion matrix |
confusionchart | Create confusion matrix chart for classification problem |
confusionmat | Compute confusion matrix for classification problem |
connectLayers | Connect layers in layer graph or network |
convolution1dLayer | 1-D convolutional layer (since R2021b) |
convolution2dLayer | 2-D convolutional layer |
convolution3dLayer | 3-D convolutional layer (since R2019a) |
convwf | Convolution weight function |
countlabels | Count number of unique labels (since R2021a) |
crepe | CREPE neural network (since R2021a) |
crepePostprocess | Postprocess output of CREPE deep learning network (since R2021a) |
crepePreprocess | Preprocess audio for CREPE deep learning network (since R2021a) |
crop2dLayer | 2-D crop layer |
crop3dLayer | 3-D crop layer (since R2019b) |
crosschannelnorm | Cross channel square-normalize using local responses (since R2020a) |
crossChannelNormalizationLayer | Channel-wise local response normalization layer |
crossentropy | Cross-entropy loss for classification tasks (since R2019b) |
crossentropy | Neural network performance |
ctc | Connectionist temporal classification (CTC) loss for unaligned sequence classification (since R2021a) |
cwt | Continuous 1-D wavelet transform |
cwtLayer | Continuous wavelet transform (CWT) layer (since R2022b) |
D
DAGNetwork | Directed acyclic graph (DAG) network for deep learning |
darknet19 | DarkNet-19 convolutional neural network (since R2020a) |
darknet53 | DarkNet-53 convolutional neural network (since R2020a) |
decode | Decode encoded data |
deepDreamImage | Visualize network features using deep dream |
deeplabv3plusLayers | Create DeepLab v3+ convolutional neural network for semantic image segmentation (since R2019b) |
defaultderiv | Default derivative function |
densenet201 | DenseNet-201 convolutional neural network |
depthConcatenationLayer | Depth concatenation layer |
detect | Detect objects using PointPillars object detector (since R2021b) |
detectTextCRAFT | Detect texts in images by using CRAFT deep learning model (since R2022a) |
dims | Dimension labels of dlarray (since R2019b) |
disconnectLayers | Disconnect layers in layer graph or network |
dist | Euclidean distance weight function |
distdelaynet | Distributed delay network |
divideblock | Divide targets into three sets using blocks of indices |
divideind | Divide targets into three sets using specified indices |
divideint | Divide targets into three sets using interleaved indices |
dividerand | Divide targets into three sets using random indices |
dividetrain | Assign all targets to training set |
dlaccelerate | Accelerate deep learning function for custom training loops (since R2021a) |
dlarray | Deep learning array for customization (since R2019b) |
dlconv | Deep learning convolution (since R2019b) |
dlcwt | Deep learning continuous wavelet transform (since R2022b) |
dlfeval | Evaluate deep learning model for custom training loops (since R2019b) |
dlgradient | Compute gradients for custom training loops using automatic differentiation (since R2019b) |
dlhdl.Target | Configure interface to target board for workflow deployment (since R2020b) |
dlhdl.Workflow | Configure deployment workflow for deep learning neural network (since R2020b) |
dlmodwt | Deep learning maximal overlap discrete wavelet transform and multiresolution analysis (since R2022a) |
dlmtimes | (Not recommended) Batch matrix multiplication for deep learning (since R2020a) |
dlnetwork | Deep learning network for custom training loops (since R2019b) |
dlode45 | Deep learning solution of nonstiff ordinary differential equation (ODE) (since R2021b) |
dlquantizationOptions | Options for quantizing a trained deep neural network (since R2020a) |
dlquantizer | Quantize a deep neural network to 8-bit scaled integer data types (since R2020a) |
dlstft | Deep learning short-time Fourier transform (since R2021a) |
dltranspconv | Deep learning transposed convolution (since R2019b) |
dlupdate | Update parameters using custom function (since R2019b) |
doc2sequence | Convert documents to sequences for deep learning |
dotprod | Dot product weight function |
dropoutLayer | Dropout layer |
E
edfheader | Create header structure for EDF or EDF+ file (since R2021a) |
edfinfo | Get information about EDF/EDF+ file (since R2020b) |
edfread | Read data from EDF/EDF+ file (since R2020b) |
edfwrite | Create or modify EDF or EDF+ file (since R2021a) |
efficientnetb0 | EfficientNet-b0 convolutional neural network (since R2020b) |
elliot2sig | Elliot 2 symmetric sigmoid transfer function |
elliotsig | Elliot symmetric sigmoid transfer function |
elmannet | Elman neural network |
eluLayer | Exponential linear unit (ELU) layer (since R2019a) |
embed | Embed discrete data (since R2020b) |
encode | Encode input data |
equalizeLayers | Equalize layer parameters of deep neural network (since R2022b) |
errsurf | Error surface of single-input neuron |
estimateNetworkMetrics | Estimate network metrics for specific layers of a neural network (since R2022a) |
estimateNetworkOutputBounds | Estimate output bounds of deep learning network (since R2022b) |
experiments.Monitor | Update results table and training plots for custom training experiments (since R2021a) |
exportNetworkToTensorFlow | Export Deep Learning Toolbox network or layer graph to TensorFlow (since R2022b) |
exportONNXNetwork | Export network to ONNX model format |
extendts | Extend time series data to given number of timesteps |
extractdata | Extract data from dlarray (since R2019b) |
F
fasterRCNNObjectDetector | Detect objects using Faster R-CNN deep learning detector |
fastRCNNObjectDetector | Detect objects using Fast R-CNN deep learning detector |
fastTextWordEmbedding | Pretrained fastText word embedding |
fcddAnomalyDetector | Detect anomalies using fully convolutional data description (FCDD) network for anomaly detection (since R2022b) |
featureInputLayer | Feature input layer (since R2020b) |
feedforwardnet | Generate feedforward neural network |
filenames2labels | Get list of labels from filenames (since R2022b) |
finddim | Find dimensions with specified label (since R2019b) |
findPlaceholderLayers | Find placeholder layers in network architecture imported from Keras or ONNX |
fitnet | Function fitting neural network |
fixunknowns | Process data by marking rows with unknown values |
flattenLayer | Flatten layer (since R2019a) |
folders2labels | Get list of labels from folder names (since R2021a) |
formwb | Form bias and weights into single vector |
forward | Compute deep learning network output for training (since R2019b) |
fpderiv | Forward propagation derivative function |
freezeParameters | Convert learnable network parameters in ONNXParameters to nonlearnable (since R2020b) |
fromnndata | Convert data from standard neural network cell array form |
fullyconnect | Sum all weighted input data and apply a bias (since R2019b) |
fullyConnectedLayer | Fully connected layer |
functionLayer | Function layer (since R2021b) |
functionToLayerGraph | (To be removed) Convert deep learning model function to a layer graph (since R2019b) |
G
gadd | Generalized addition |
gdivide | Generalized division |
gelu | Apply Gaussian error linear unit (GELU) activation (since R2022b) |
geluLayer | Gaussian error linear unit (GELU) layer (since R2022b) |
generateFunction | Generate a MATLAB function to run the autoencoder |
generateSimulink | Generate a Simulink model for the autoencoder |
genFunction | Generate MATLAB function for simulating shallow neural network |
gensim | Generate Simulink block for shallow neural network simulation |
getelements | Get neural network data elements |
getL2Factor | Get L2 regularization factor of layer learnable parameter |
getLearnRateFactor | Get learn rate factor of layer learnable parameter |
getsamples | Get neural network data samples |
getsignals | Get neural network data signals |
getsiminit | Get Simulink neural network block initial input and layer delays states |
gettimesteps | Get neural network data timesteps |
getwb | Get network weight and bias values as single vector |
globalAveragePooling1dLayer | 1-D global average pooling layer (since R2021b) |
globalAveragePooling2dLayer | 2-D global average pooling layer (since R2019b) |
globalAveragePooling3dLayer | 3-D global average pooling layer (since R2019b) |
globalMaxPooling1dLayer | 1-D global max pooling layer (since R2021b) |
globalMaxPooling2dLayer | Global max pooling layer (since R2020a) |
globalMaxPooling3dLayer | 3-D global max pooling layer (since R2020a) |
gmultiply | Generalized multiplication |
gnegate | Generalized negation |
googlenet | GoogLeNet convolutional neural network |
gpu2nndata | Reformat neural data back from GPU |
gradCAM | Explain network predictions using Grad-CAM (since R2021a) |
gridtop | Grid layer topology function |
groupedConvolution2dLayer | 2-D grouped convolutional layer (since R2019a) |
groupnorm | Normalize data across grouped subsets of channels for each observation independently (since R2020b) |
groupNormalizationLayer | Group normalization layer (since R2020b) |
groupSubPlot | Group metrics in experiment training plot (since R2021a) |
groupSubPlot | Group metrics in training plot (since R2022b) |
gru | Gated recurrent unit (since R2020a) |
gruLayer | Gated recurrent unit (GRU) layer for recurrent neural network (RNN) (since R2020a) |
gsqrt | Generalized square root |
gsubtract | Generalized subtraction |
H
hardlim | Hard-limit transfer function |
hardlims | Symmetric hard-limit transfer function |
hasdata | Determine if minibatchqueue can return mini-batch (since R2020b) |
hextop | Hexagonal layer topology function |
huber | Huber loss for regression tasks (since R2021a) |
I
image3dInputLayer | 3-D image input layer (since R2019a) |
imageDataAugmenter | Configure image data augmentation |
imageInputLayer | Image input layer |
imageLIME | Explain network predictions using LIME (since R2020b) |
importCaffeLayers | Import convolutional neural network layers from Caffe |
importCaffeNetwork | Import pretrained convolutional neural network models from Caffe |
importKerasLayers | (To be removed) Import layers from Keras network |
importKerasNetwork | (To be removed) Import pretrained Keras network and weights |
importNetworkFromPyTorch | Import PyTorch network as MATLAB network (since R2022b) |
importONNXFunction | Import pretrained ONNX network as a function (since R2020b) |
importONNXLayers | (To be removed) Import layers from ONNX network |
importONNXNetwork | (To be removed) Import pretrained ONNX network |
importTensorFlowLayers | (To be removed) Import layers from TensorFlow network (since R2021a) |
importTensorFlowNetwork | (To be removed) Import pretrained TensorFlow network (since R2021a) |
inceptionresnetv2 | Pretrained Inception-ResNet-v2 convolutional neural network |
inceptionv3 | Inception-v3 convolutional neural network |
ind2vec | Convert indices to vectors |
ind2word | Map encoding index to word |
init | Initialize neural network |
initcon | Conscience bias initialization function |
initialize | Initialize learnable and state parameters of a dlnetwork (since R2021a) |
initlay | Layer-by-layer network initialization function |
initlvq | LVQ weight initialization function |
initnw | Nguyen-Widrow layer initialization function |
initwb | By weight and bias layer initialization function |
initzero | Zero weight and bias initialization function |
instancenorm | Normalize across each channel for each observation independently (since R2021a) |
instanceNormalizationLayer | Instance normalization layer (since R2021a) |
isconfigured | Indicate if network inputs and outputs are configured |
isdlarray | Check if object is dlarray (since R2020b) |
isequal | Check equality of deep learning layer graphs or networks (since R2021a) |
isequaln | Check equality of deep learning layer graphs or networks ignoring NaN values (since R2021a) |
isVocabularyWord | Test if word is member of word embedding or encoding |
L
l1loss | L1 loss for regression tasks (since R2021b) |
l2loss | L2 loss for regression tasks (since R2021b) |
labeledSignalSet | Create labeled signal set |
Layer | Network layer for deep learning |
layerGraph | Graph of network layers for deep learning |
layernorm | Normalize data across all channels for each observation independently (since R2021a) |
layerNormalizationLayer | Layer normalization layer (since R2021a) |
layrecnet | Layer recurrent neural network |
leakyrelu | Apply leaky rectified linear unit activation (since R2019b) |
leakyReluLayer | Leaky Rectified Linear Unit (ReLU) layer |
learncon | Conscience bias learning function |
learngd | Gradient descent weight and bias learning function |
learngdm | Gradient descent with momentum weight and bias learning function |
learnh | Hebb weight learning rule |
learnhd | Hebb with decay weight learning rule |
learnis | Instar weight learning function |
learnk | Kohonen weight learning function |
learnlv1 | LVQ1 weight learning function |
learnlv2 | LVQ2.1 weight learning function |
learnos | Outstar weight learning function |
learnp | Perceptron weight and bias learning function |
learnpn | Normalized perceptron weight and bias learning function |
learnsom | Self-organizing map weight learning function |
learnsomb | Batch self-organizing map weight learning function |
learnwh | Widrow-Hoff weight/bias learning function |
linearlayer | Create linear layer |
linkdist | Link distance function |
loadTFLiteModel | Load TensorFlow Lite model (since R2022a) |
logsig | Log-sigmoid transfer function |
lstm | Long short-term memory (since R2019b) |
lstmLayer | Long short-term memory (LSTM) layer for recurrent neural network (RNN) |
lstmProjectedLayer | Long short-term memory (LSTM) projected layer for recurrent neural network (RNN) (since R2022b) |
lvqnet | Learning vector quantization neural network |
lvqoutputs | LVQ outputs processing function |
M
mae | Mean absolute error performance function |
mandist | Manhattan distance weight function |
mapminmax | Process matrices by mapping row minimum and maximum values to [-1 1] |
mapstd | Process matrices by mapping each row's means to 0 and deviations to 1 |
maskrcnn | Detect objects using Mask R-CNN instance segmentation (since R2021b) |
matlab.io.datastore.BackgroundDispatchable | (Not recommended) Add prefetch reading support to datastore |
matlab.io.datastore.BackgroundDispatchable.readByIndex | (Not recommended) Return observations specified by index from datastore |
matlab.io.datastore.MiniBatchable | Add mini-batch support to datastore |
matlab.io.datastore.MiniBatchable.read | (Not recommended) Read data from custom mini-batch datastore |
matlab.io.datastore.PartitionableByIndex | (Not recommended) Add parallelization support to datastore |
matlab.io.datastore.PartitionableByIndex.partitionByIndex | (Not recommended) Partition datastore according to indices |
maxlinlr | Maximum learning rate for linear layer |
maxpool | Pool data to maximum value (since R2019b) |
maxPooling1dLayer | 1-D max pooling layer (since R2021b) |
maxPooling2dLayer | Max pooling layer |
maxPooling3dLayer | 3-D max pooling layer (since R2019a) |
maxunpool | Unpool the output of a maximum pooling operation (since R2019b) |
maxUnpooling2dLayer | Max unpooling layer |
meanabs | Mean of absolute elements of matrix or matrices |
meansqr | Mean of squared elements of matrix or matrices |
midpoint | Midpoint weight initialization function |
minibatchqueue | Create mini-batches for deep learning (since R2020b) |
minmax | Ranges of matrix rows |
mobilenetv2 | MobileNet-v2 convolutional neural network (since R2019a) |
modwt | Maximal overlap discrete wavelet transform |
modwtLayer | Maximal overlap discrete wavelet transform (MODWT) layer (since R2022b) |
mse | Half mean squared error (since R2019b) |
mse | Mean squared normalized error performance function |
multiplicationLayer | Multiplication layer (since R2020b) |
N
narnet | Nonlinear autoregressive neural network |
narxnet | Nonlinear autoregressive neural network with external input |
nasnetlarge | Pretrained NASNet-Large convolutional neural network (since R2019a) |
nasnetmobile | Pretrained NASNet-Mobile convolutional neural network (since R2019a) |
nctool | Open Neural Net Clustering app |
negdist | Negative distance weight function |
netinv | Inverse transfer function |
netprod | Product net input function |
netsum | Sum net input function |
network | Convert Autoencoder object into network object |
network | Create custom shallow neural network |
networkDataLayout | Deep learning network data layout for learnable parameter initialization (since R2022b) |
neuronPCA | Principal component analysis of neuron activations (since R2022b) |
newgrnn | Design generalized regression neural network |
newlind | Design linear layer |
newpnn | Design probabilistic neural network |
newrb | Design radial basis network |
newrbe | Design exact radial basis network |
next | Obtain next mini-batch of data from minibatchqueue (since R2020b) |
nftool | Open Neural Net Fitting app |
nncell2mat | Combine neural network cell data into matrix |
nncorr | Cross correlation between neural network time series |
nndata | Create neural network data |
nndata2gpu | Format neural data for efficient GPU training or simulation |
nndata2sim | Convert neural network data to Simulink time series |
nnsize | Number of neural data elements, samples, timesteps, and signals |
nntool | (Removed) Open Network/Data Manager |
nntraintool | (Removed) Neural network training tool |
noloop | Remove neural network open- and closed-loop feedback |
normc | Normalize columns of matrix |
normprod | Normalized dot product weight function |
normr | Normalize rows of matrix |
nprtool | Open Neural Net Pattern Recognition app |
ntstool | Open Neural Net Time Series app |
num2deriv | Numeric two-point network derivative function |
num5deriv | Numeric five-point stencil neural network derivative function |
numelements | Number of elements in neural network data |
numfinite | Number of finite values in neural network data |
numnan | Number of NaN values in neural network data |
numsamples | Number of samples in neural network data |
numsignals | Number of signals in neural network data |
numtimesteps | Number of time steps in neural network data |
O
occlusionSensitivity | Explain network predictions by occluding the inputs (since R2019b) |
onehotdecode | Decode probability vectors into class labels (since R2020b) |
onehotencode | Encode data labels into one-hot vectors (since R2020b) |
ONNXParameters | Parameters of imported ONNX network for deep learning (since R2020b) |
openl3 | OpenL3 neural network (since R2021a) |
openl3Embeddings | Extract OpenL3 feature embeddings (since R2022a) |
openl3Preprocess | Preprocess audio for OpenL3 feature extraction (since R2021a) |
openloop | Convert neural network closed-loop feedback to open loop |
P
padsequences | Pad or truncate sequence data to same length (since R2021a) |
partition | Partition minibatchqueue (since R2020b) |
partitionByIndex | Partition augmentedImageDatastore according to indices |
patternnet | Generate pattern recognition network |
perceptron | Simple single-layer binary classifier |
perform | Calculate network performance |
pitchnn | Estimate pitch with deep learning neural network (since R2021a) |
pixelLabelDatastore | Datastore for pixel label data |
PlaceholderLayer | Layer replacing an unsupported Keras or ONNX layer |
plot | Plot neural network architecture |
plot | Plot receiver operating characteristic (ROC) curves and other performance curves (since R2022b) |
plotconfusion | Plot classification confusion matrix |
plotep | Plot weight-bias position on error surface |
ploterrcorr | Plot autocorrelation of error time series |
ploterrhist | Plot error histogram |
plotes | Plot error surface of single-input neuron |
plotfit | Plot function fit |
plotinerrcorr | Plot input to error time-series cross-correlation |
plotpc | Plot classification line on perceptron vector plot |
plotperform | Plot network performance |
plotpv | Plot perceptron input/target vectors |
plotregression | Plot linear regression |
plotresponse | Plot dynamic network time series response |
plotroc | Plot receiver operating characteristic |
plotsom | Plot self-organizing map |
plotsomhits | Plot self-organizing map sample hits |
plotsomnc | Plot self-organizing map neighbor connections |
plotsomnd | Plot self-organizing map neighbor distances |
plotsomplanes | Plot self-organizing map weight planes |
plotsompos | Plot self-organizing map weight positions |
plotsomtop | Plot self-organizing map topology |
plottrainstate | Plot training state values |
plotv | Plot vectors as lines from origin |
plotvec | Plot vectors with different colors |
plotwb | Plot Hinton diagram of weight and bias values |
plotWeights | Plot a visualization of the weights for the encoder of an autoencoder |
pnormc | Pseudonormalize columns of matrix |
pointnetplusLayers | Create PointNet++ segmentation network (since R2021b) |
pointPillarsObjectDetector | PointPillars object detector (since R2021b) |
poslin | Positive linear transfer function |
predict | Predict responses using trained deep learning neural network |
predict | Compute deep learning network output for inference (since R2019b) |
predict | Compute deep learning network output for inference by using a TensorFlow Lite model (since R2022a) |
predict | Reconstruct the inputs using trained autoencoder |
predictAndUpdateState | Predict responses using a trained recurrent neural network and update the network state |
preparets | Prepare input and target time series data for network simulation or training |
processpca | Process columns of matrix with principal component analysis |
prune | Delete neural inputs, layers, and outputs with sizes of zero |
prunedata | Prune data for consistency with pruned network |
purelin | Linear transfer function |
Q
quant | Discretize values as multiples of a quantity |
quantizationDetails | Display quantization details for a neural network (since R2022a) |
quantize | Quantize deep neural network (since R2022a) |
R
radbas | Radial basis transfer function |
radbasn | Normalized radial basis transfer function |
randnc | Normalized column weight initialization function |
randnr | Normalized row weight initialization function |
randomPatchExtractionDatastore | Datastore for extracting random 2-D or 3-D patches from images or pixel label images |
rands | Symmetric random weight/bias initialization function |
randsmall | Small random weight/bias initialization function |
randtop | Random layer topology function |
rcnnObjectDetector | Detect objects using R-CNN deep learning detector |
read | Read data from augmentedImageDatastore |
readByIndex | Read data specified by index from augmentedImageDatastore |
readWordEmbedding | Read word embedding from file |
recordMetrics | Record metric values in experiment results table and training plot (since R2021a) |
recordMetrics | Record metric values for custom training loops (since R2022b) |
regression | (Not recommended) Perform linear regression of shallow network outputs on targets |
regressionLayer | Regression output layer |
RegressionOutputLayer | Regression output layer |
relu | Apply rectified linear unit activation (since R2019b) |
reluLayer | Rectified Linear Unit (ReLU) layer |
removeconstantrows | Process matrices by removing rows with constant values |
removedelay | Remove delay to neural network's response |
removeLayers | Remove layers from layer graph or network |
removeParameter | Remove parameter from ONNXParameters object (since R2020b) |
removerows | Process matrices by removing rows with specified indices |
replaceLayer | Replace layer in layer graph or network |
reset | Reset minibatchqueue to start of data (since R2020b) |
resetState | Reset state parameters of neural network |
resnet101 | ResNet-101 convolutional neural network |
resnet18 | ResNet-18 convolutional neural network |
resnet3dLayers | Create 3-D residual network (since R2021b) |
resnet50 | ResNet-50 convolutional neural network |
resnetLayers | Create 2-D residual network (since R2021b) |
revert | Change network weights and biases to previous initialization values |
rmspropupdate | Update parameters using root mean squared propagation (RMSProp) (since R2019b) |
roc | Receiver operating characteristic |
rocmetrics | Receiver operating characteristic (ROC) curve and performance metrics for binary and multiclass classifiers (since R2022b) |
S
sae | Sum absolute error performance function |
satlin | Saturating linear transfer function |
satlins | Symmetric saturating linear transfer function |
scalprod | Scalar product weight function |
segnetLayers | Create SegNet layers for semantic segmentation |
selforgmap | Self-organizing map |
separatewb | Separate biases and weight values from weight/bias vector |
seq2con | Convert sequential vectors to concurrent vectors |
sequenceFoldingLayer | Sequence folding layer (since R2019a) |
sequenceInputLayer | Sequence input layer |
sequenceUnfoldingLayer | Sequence unfolding layer (since R2019a) |
SeriesNetwork | Series network for deep learning |
setelements | Set neural network data elements |
setL2Factor | Set L2 regularization factor of layer learnable parameter |
setLearnRateFactor | Set learn rate factor of layer learnable parameter |
setsamples | Set neural network data samples |
setsignals | Set neural network data signals |
setsiminit | Set neural network Simulink block initial conditions |
settimesteps | Set neural network data timesteps |
setwb | Set all network weight and bias values with single vector |
sgdmupdate | Update parameters using stochastic gradient descent with momentum (SGDM) (depuis R2019b) |
shuffle | Shuffle data in augmentedImageDatastore |
shuffle | Shuffle data in minibatchqueue (depuis R2020b) |
shufflenet | Pretrained ShuffleNet convolutional neural network (depuis R2019a) |
sigmoid | Appliquer l’activation sigmoïde (depuis R2019b) |
sigmoidLayer | Sigmoid layer (depuis R2020b) |
signalDatastore | Datastore for collection of signals (depuis R2020a) |
signalFrequencyFeatureExtractor | Streamline signal frequency feature extraction (depuis R2021b) |
signalLabelDefinition | Create signal label definition |
signalMask | Modify and convert signal masks and extract signal regions of interest (depuis R2020b) |
signalTimeFeatureExtractor | Streamline signal time feature extraction (depuis R2021a) |
sim | Simulate neural network |
sim2nndata | Convert Simulink time series to neural network data |
softmax | Apply softmax activation to channel dimension (depuis R2019b) |
softmax | Softmax transfer function |
softmaxLayer | Couche softmax |
sortClasses | Sort classes of confusion matrix chart |
splitlabels | Find indices to split labels according to specified proportions (depuis R2021a) |
squeezenet | SqueezeNet convolutional neural network |
squeezesegv2Layers | Create SqueezeSegV2 segmentation network for organized lidar point cloud (depuis R2020b) |
srchbac | 1-D minimization using backtracking |
srchbre | 1-D interval location using Brent’s method |
srchcha | 1-D minimization using Charalambous' method |
srchgol | 1-D minimization using golden section search |
srchhyb | 1-D minimization using a hybrid bisection-cubic search |
ssdObjectDetector | Detect objects using SSD deep learning detector (since R2020a) |
sse | Sum squared error performance function |
stack | Stack encoders from several autoencoders together |
staticderiv | Static derivative function |
stft | Short-time Fourier transform (since R2019a) |
stftLayer | Short-time Fourier transform layer (since R2021b) |
stripdims | Remove dlarray data format (since R2019b) |
sumabs | Sum of absolute elements of matrix or matrices |
summary | Print network summary (since R2022b) |
sumsqr | Sum of squared elements of matrix or matrices |
swishLayer | Swish layer (since R2021a) |
T
tanhLayer | Hyperbolic tangent (tanh) layer (since R2019a) |
tansig | Hyperbolic tangent sigmoid transfer function |
tapdelay | Shift neural network time series data for tap delay |
taylorPrunableNetwork | Network that can be pruned by using first-order Taylor approximation (since R2022a) |
TFLiteModel | TensorFlow Lite model (since R2022a) |
timedelaynet | Time delay neural network |
tonndata | Convert data to standard neural network cell array form |
train | Train shallow neural network |
trainAutoencoder | Train an autoencoder |
trainb | Batch training with weight and bias learning rules |
trainbfg | BFGS quasi-Newton backpropagation |
trainbr | Bayesian regularization backpropagation |
trainbu | Batch unsupervised weight/bias training |
trainc | Cyclical order weight/bias training |
traincgb | Conjugate gradient backpropagation with Powell-Beale restarts |
traincgf | Conjugate gradient backpropagation with Fletcher-Reeves updates |
traincgp | Conjugate gradient backpropagation with Polak-Ribière updates |
traingd | Gradient descent backpropagation |
traingda | Gradient descent with adaptive learning rate backpropagation |
traingdm | Gradient descent with momentum backpropagation |
traingdx | Gradient descent with momentum and adaptive learning rate backpropagation |
trainingOptions | Options for training deep learning neural network |
TrainingOptionsADAM | Training options for Adam optimizer |
TrainingOptionsRMSProp | Training options for RMSProp optimizer |
TrainingOptionsSGDM | Training options for stochastic gradient descent with momentum |
trainingProgressMonitor | Monitor and plot training progress for deep learning custom training loops (since R2022b) |
trainlm | Levenberg-Marquardt backpropagation |
trainNetwork | Train neural network |
trainoss | One-step secant backpropagation |
trainPointPillarsObjectDetector | Train PointPillars object detector (since R2021b) |
trainr | Random order incremental training with learning functions |
trainrp | Resilient backpropagation |
trainru | Unsupervised random order weight/bias training |
trains | Sequential order incremental training with learning functions |
trainscg | Scaled conjugate gradient backpropagation |
trainSoftmaxLayer | Train a softmax layer for classification |
trainWordEmbedding | Train word embedding |
transposedConv1dLayer | Transposed 1-D convolution layer (since R2022a) |
transposedConv2dLayer | Transposed 2-D convolution layer |
transposedConv3dLayer | Transposed 3-D convolution layer (since R2019a) |
TransposedConvolution1DLayer | Transposed 1-D convolution layer (since R2022a) |
TransposedConvolution2DLayer | Transposed 2-D convolution layer |
TransposedConvolution3dLayer | Transposed 3-D convolution layer (since R2019a) |
tribas | Triangular basis transfer function |
tritop | Triangle layer topology function |
U
unconfigure | Unconfigure network inputs and outputs |
unet3dLayers | Create 3-D U-Net layers for semantic segmentation of volumetric images (since R2019b) |
unetLayers | Create U-Net layers for semantic segmentation |
unfreezeParameters | Convert nonlearnable network parameters in ONNXParameters to learnable (since R2020b) |
updateInfo | Update information columns in experiment results table (since R2021a) |
updateInfo | Update information values for custom training loops (since R2022b) |
updatePrunables | Remove filters from prunable layers based on importance scores (since R2022a) |
updateScore | Compute and accumulate Taylor-based importance scores for pruning (since R2022a) |
V
validate | Quantize and validate a deep neural network (since R2020a) |
vec2ind | Convert vectors to indices |
vec2word | Map embedding vector to word |
verifyNetworkRobustness | Verify adversarial robustness of deep learning network (since R2022b) |
vgg16 | VGG-16 convolutional neural network |
vgg19 | VGG-19 convolutional neural network |
vggish | VGGish neural network (since R2020b) |
vggishEmbeddings | Extract VGGish feature embeddings (since R2022a) |
vggishPreprocess | Preprocess audio for VGGish feature extraction (since R2021a) |
view | View shallow neural network |
view | View autoencoder |
W
waveletScattering | Wavelet time scattering |
word2ind | Map word to encoding index |
word2vec | Map word to embedding vector |
wordEmbedding | Word embedding model to map words to vectors and back |
wordEmbeddingLayer | Word embedding layer for deep learning neural network |
wordEncoding | Word encoding model to map words to indices and back |
writeWordEmbedding | Write word embedding file |
X
xception | Xception convolutional neural network (since R2019a) |
Y
yamnet | YAMNet neural network (since R2020b) |
yamnetGraph | Graph of YAMNet AudioSet ontology (since R2020b) |
yamnetPreprocess | Preprocess audio for YAMNet classification (since R2021a) |
yolov2ObjectDetector | Detect objects using YOLO v2 object detector (since R2019a) |
yolov3ObjectDetector | Detect objects using YOLO v3 object detector (since R2021a) |
yolov4ObjectDetector | Detect objects using YOLO v4 object detector (since R2022a) |