Modeling view - with and without standardization on tabular data / Example workflow
Step 1: Create experiments with dataset version NoStdTabular/TargetStd
Click New experiment, name it NoStdTabular/TargetStd.
Build the neural architecture by adding the blocks Input, Dense, Batch normalization, Dense, and Target, in that order.
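Outside the platform, the same block stack can be sketched as a plain numpy forward pass. This is only an illustration of what the blocks compute; the hidden width (16) and the ReLU activation on the first Dense block are assumptions, while the final 1-node linear Dense matches the regression output configured in Step 3.

```python
import numpy as np

rng = np.random.default_rng(2)  # mirrors the Data access seed used later

def dense(x, w, b):
    # Dense block: affine transform of the input.
    return x @ w + b

def batch_norm(x, eps=1e-5):
    # Batch normalization block: normalize each feature over the batch
    # (learnable scale/shift and inference-time statistics omitted).
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

# Toy forward pass: 128 rows (the batch size from Step 4), 8 input features.
x = rng.normal(size=(128, 8))                                         # Input block
h = np.maximum(0, dense(x, rng.normal(size=(8, 16)), np.zeros(16)))   # Dense block
h = batch_norm(h)                                                     # Batch normalization block
y_pred = dense(h, rng.normal(size=(16, 1)), np.zeros(1))              # last Dense: 1 node, linear
print(y_pred.shape)  # (128, 1)
```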
Step 2: Configure the dataset settings
Set the Dataset version to NoStdTabular/TargetStd.
Step 3: Configure the block settings
Select the Input block and set the Input feature to Tabular_path. Select the Target block, set the Target feature to Target_medianhouseValue, and set the Loss to Mean square error. Select the last Dense block, set the number of Nodes to 1, and set the Activation function to Linear.
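Mean square error, the loss selected here, is the average of the squared differences between predictions and targets; a minimal sketch:

```python
import numpy as np

def mean_squared_error(y_true, y_pred):
    # Average of squared residuals; lower is better for a regression target.
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))

print(mean_squared_error([2.0, 3.0, 5.0], [2.5, 3.0, 4.0]))  # ≈ 0.4167
```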
Step 4: Configure the settings for running the model
Navigate to the Settings tab in the Inspector. Set the Batch size to 128 and the Epochs to 50. Set the Data access seed to 2, and use this same seed for all of the experiments.
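Fixing the seed makes the order in which data is accessed reproducible across runs, so loss differences reflect the dataset version rather than shuffling luck. An illustrative analogy in numpy (not the platform's implementation):

```python
import numpy as np

# Seeding a generator (here with 2, matching the Data access seed) makes the
# shuffling of row indices identical on every run.
rng = np.random.default_rng(2)
indices = rng.permutation(10)
print(indices)  # same permutation every run with seed 2
```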
Step 5: Click Run
Step 6: Duplicate the experiment
While it’s running, duplicate this experiment (without weights) and keep the default name NoStdTabular/TargetStd 2. Then duplicate NoStdTabular/TargetStd 2, which results in NoStdTabular/TargetStd 3, and run both with the same settings. Remember to set the Data access seed to 2 in each copy. The purpose of running several experiments with identical settings is to average their loss values, so that the average can be compared with the average loss from the other dataset version (with and without standardization). Keep in mind that a statistical test requires more experiments than this to draw a firm conclusion.
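The comparison across the three replicate runs can be sketched with the standard library; the loss values below are hypothetical placeholders, not results from the tutorial:

```python
import statistics

# Hypothetical final loss values from three runs of each dataset version.
no_std_losses = [0.42, 0.45, 0.44]
std_losses = [0.31, 0.29, 0.30]

# Average the replicates; the spread (stdev) hints at run-to-run variation.
print(f"NoStd mean loss: {statistics.mean(no_std_losses):.3f} "
      f"(stdev {statistics.stdev(no_std_losses):.3f})")
print(f"Std mean loss:   {statistics.mean(std_losses):.3f} "
      f"(stdev {statistics.stdev(std_losses):.3f})")
```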
Step 7: Create experiments with dataset version StdTabular/TargetStd
Duplicate the experiment NoStdTabular/TargetStd, so that the same settings are inherited, and change only the Dataset version to StdTabular/TargetStd in the Modeling view.
Following Step 6, create three experiments with the dataset version StdTabular/TargetStd.
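The difference between the two dataset versions is standardization of the numeric columns. A minimal sketch of z-score standardization, the usual meaning of "standardized" for tabular features:

```python
import numpy as np

def standardize(column):
    # z-score: subtract the mean and divide by the standard deviation,
    # so the column ends up with mean 0 and standard deviation 1.
    column = np.asarray(column, dtype=float)
    return (column - column.mean()) / column.std()

values = [10.0, 20.0, 30.0, 40.0]
z = standardize(values)
print(z.mean(), z.std())  # ≈ 0.0 and 1.0
```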
The Modeling view tracks where an experiment came from. Click the experiment link at the bottom of the Modeling canvas to see the source experiment; the experiment creator and the creation date are shown at the bottom right.
We recommend renaming each experiment with meaningful keywords; this makes it easier to monitor and compare experiments with different settings. Next, compare the results in the Evaluation view.