TensorFlow is an API built by Google that mainly deals with neural networks, but it also offers higher-level APIs (such as estimators) that, much like scikit-learn, let you build ML models in one line.

### TF 1 vs 2

- Sessions and placeholders are removed; eager execution plus tf.function replaces them

### TF estimator

It is a library that holds high-level ML algorithms. You can create your own estimator by extending **tf.estimator.Estimator**, or just use a predefined estimator, such as:

- **tf.estimator.DNNClassifier** for deep models that perform multi-class classification.
- **tf.estimator.DNNLinearCombinedClassifier** for wide & deep models.
- **tf.estimator.LinearClassifier** for classifiers based on linear models.

### TF Keras

Keras is a high-level, ready-to-use API for building and training models. A typical workflow requires:

- Model - tf.keras.Sequential()
- Layers - layers.Dense(64, activation='relu'), which takes
  - activation
  - kernel_regularizer, e.g. tf.keras.regularizers.l1(0.01)
  - bias_regularizer
- Compile - tf.keras.Model.compile, which takes
  - optimizer
  - loss
  - metrics
- Fit data - model.fit()
  - epochs
  - batch_size
  - validation_data
- Evaluate - model.evaluate(dataset)
- Predict - model.predict(data, batch_size=32)
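A minimal end-to-end sketch of this workflow, assuming TensorFlow 2.x; the layer sizes, optimizer choice, and random data are all illustrative:

```python
import numpy as np
import tensorflow as tf

# Illustrative random data: 64 samples, 8 features, 3 classes.
x = np.random.rand(64, 8).astype('float32')
y = np.random.randint(0, 3, size=(64,))

# Model + layers (with an L1 kernel regularizer, as in the list above).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu',
                          kernel_regularizer=tf.keras.regularizers.l1(0.01)),
    tf.keras.layers.Dense(3, activation='softmax'),
])

# Compile: optimizer, loss, metrics.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Fit: epochs, batch_size, validation_data.
model.fit(x, y, epochs=2, batch_size=16, validation_data=(x, y), verbose=0)

# Evaluate and predict.
loss, acc = model.evaluate(x, y, verbose=0)
preds = model.predict(x, batch_size=32, verbose=0)
print(preds.shape)  # (64, 3)
```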

### Declarative approach: take a tensor, return a tensor.

### Callbacks during training

- tf.keras.callbacks.ModelCheckpoint: Save checkpoints of your model at regular intervals.
- tf.keras.callbacks.LearningRateScheduler: Dynamically change the learning rate.
- tf.keras.callbacks.EarlyStopping: Interrupt training when validation performance has stopped improving.
- tf.keras.callbacks.TensorBoard: Monitor the model's behavior using TensorBoard.
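Two of these callbacks can be sketched in a short run (random data; the learning-rate schedule and patience values are illustrative):

```python
import numpy as np
import tensorflow as tf

# Illustrative random regression data.
x = np.random.rand(32, 4).astype('float32')
y = np.random.rand(32, 1).astype('float32')

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer='adam', loss='mse')

callbacks = [
    # Halve the learning rate every epoch (schedule chosen for illustration).
    tf.keras.callbacks.LearningRateScheduler(lambda epoch: 1e-3 * 0.5 ** epoch),
    # Stop early if validation loss stops improving for 2 epochs.
    tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=2),
]

history = model.fit(x, y, epochs=5, validation_data=(x, y),
                    callbacks=callbacks, verbose=0)

# EarlyStopping may cut training short, so at most 5 epochs ran.
print(len(history.history['loss']))
```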

### Save & Reload

### Data preprocessing

**TF Logging** - It has four logging levels: DEBUG, INFO, WARNING, ERROR. Set the verbosity with tf.logging.set_verbosity(tf.logging.ERROR).

**Tensors**: A single number, a vector, a matrix, etc. are all forms of a tensor; as the dimension grows, the results are still stored as tensors. How would you store 10 matrices? You would think of an array of matrices, and that variable is called a tensor.
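For instance, stacking 10 matrices with NumPy yields a rank-3 tensor (the shapes here are just illustrative):

```python
import numpy as np

# Ten 3x4 matrices, stacked into a single rank-3 tensor.
matrices = [np.random.rand(3, 4) for _ in range(10)]
tensor = np.stack(matrices)  # an "array of matrices"

print(tensor.ndim)   # 3
print(tensor.shape)  # (10, 3, 4)
```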

TensorFlow formally provides two packages: one to define the computational graph, and another to run computations on that graph (sessions).

**Computational Graph** - The graph is the same old neural-network graph: each node has some purpose (variable, computation, placeholder), inputs, and outputs. It's a data structure well defined in TF.

Advantages of using a graph: portability (export and share), easy to understand, parallelism.

The components of a computational graph are: variables, placeholders, constants, operations, the graph itself, and sessions.

Compared to NumPy arrays, they don't allocate memory up front. For example, a = tf.zeros((int(1e12), int(1e12))) will just record a shape, not actually allocate memory; memory is only allocated when the graph is executed.

### How the Graph component looks in Code.

Here is a very nice explanation from a blog post: https://medium.com/@d3lm/understand-tensorflow-by-mimicking-its-api-from-scratch-faa55787170d
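In the spirit of that post, here is a toy sketch that mimics the graph/session split in plain Python. All class names are illustrative, not real TF API: building nodes only records the graph, and nothing is computed until Session.run is called:

```python
class Node:
    """A graph node: records its inputs, computes nothing until run."""
    def __init__(self, *inputs):
        self.inputs = inputs

class Constant(Node):
    def __init__(self, value):
        super().__init__()
        self.value = value
    def compute(self):
        return self.value

class Variable(Constant):
    # In this toy version a variable just holds a mutable value.
    pass

class Placeholder(Node):
    # Its value is supplied at run time via feed_dict.
    pass

class Add(Node):
    def compute(self, a, b):
        return a + b

class Mul(Node):
    def compute(self, a, b):
        return a * b

class Session:
    """Walks the graph recursively and evaluates each node."""
    def run(self, node, feed_dict=None):
        feed = feed_dict or {}
        if isinstance(node, Placeholder):
            return feed[node]
        args = [self.run(i, feed) for i in node.inputs]
        return node.compute(*args)

# Build the graph y = w * x + b, then execute it with a session.
x = Placeholder()
w = Variable(3)
b = Constant(1)
y = Add(Mul(w, x), b)

print(Session().run(y, feed_dict={x: 5}))  # 16
```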

### Important Functions Used in Andrew Ng's Course

- tf.nn.conv2d(X, W, strides=[1,s,s,1], padding='SAME') - convolution layer
- tf.nn.max_pool(A, ksize=[1,f,f,1], strides=[1,s,s,1], padding='SAME') - max-pool layer
- tf.nn.relu(Z) - element-wise ReLU
- tf.contrib.layers.flatten(P) - flattens the multidimensional input into shape (batch_size, k)
- tf.contrib.layers.fully_connected(F, num_outputs) - creates a fully connected layer from F to num_outputs
- tf.nn.softmax_cross_entropy_with_logits(logits=Z, labels=Y) - calculates the per-example cost
- tf.reduce_mean - averages the cost over all examples
- 'VALID' vs 'SAME' padding: 'SAME' adds extra padding where needed so the output covers every input position; 'VALID' only applies the filter where it fits entirely inside the input, so the edges may be dropped. (https://stackoverflow.com/questions/37674306/what-is-the-difference-between-same-and-valid-padding-in-tf-nn-max-pool-of-t)
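The output-size arithmetic behind the two padding modes can be checked in a few lines; the formulas below follow the TF documentation, and the numbers (input length 13, filter 6, stride 5) follow the linked Stack Overflow example:

```python
import math

# Output length for one spatial dimension:
#   SAME : out = ceil(n / s)             -- pads so every input position is covered
#   VALID: out = ceil((n - f + 1) / s)   -- filter must fit entirely inside the input
def out_len(n, f, s, padding):
    if padding == 'SAME':
        return math.ceil(n / s)
    return math.ceil((n - f + 1) / s)

print(out_len(13, 6, 5, 'SAME'))   # 3
print(out_len(13, 6, 5, 'VALID'))  # 2
```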