Each month we share updates about our progress on the platform and plans for what’s next.
The December 2018 sprint brings a faster and more comprehensive deployment function, a new set of deep neural network snippets, and randomized compiler options. Let's dive in and see what else is new.
Search and filter are now available for project lists. When several people work on numerous projects, finding the right one can be time-consuming. Now you can search by creator or project name to find your own or your team members' projects without scrolling through long lists!
We’ve created a handy list of well-known and well-performing networks to help you get started with more complex deep neural network architectures, without having to build the models yourself. The new snippets on the platform include popular implementations of ResNetv2, DenseNet, Inception, VGG, Tiramisu and U-Net. These can be found under Modeling > Blocks > Snippets. If you’re unsure which snippet fits your problem, check the Snippet tooltip, which appears when you hover over a snippet block, for the problem types it suits, the input data it expects and additional background. Once you know which snippet to use, just add it to the modeling canvas, define the training data and your target and start experimenting! Check out our blog post, Snippets – your gateway to deep neural network architectures, for more information on which snippets to use for different problem types.
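To give a feel for what these snippets chain together, here is a minimal, hypothetical sketch in plain Python of the core idea behind a ResNet-style architecture: a residual block returns its input plus a learned transformation of it. This is an illustration of the concept only, not the platform's implementation; the `transform` callable stands in for the convolutional layers a real block would contain.

```python
def residual_block(x, transform):
    """Return x + transform(x): the identity-shortcut idea at the heart of ResNet.

    x         -- a list of feature values (stands in for a tensor)
    transform -- a callable playing the role of the block's learned layers
    """
    return [xi + ti for xi, ti in zip(x, transform(x))]

# Toy "learned transformation": double every feature.
out = residual_block([1, 2, 3], lambda v: [2 * vi for vi in v])
# out == [3, 6, 9] — the input passes through unchanged and the
# transformation is added on top, which is what lets very deep stacks
# of such blocks train well.
```

Snippets like ResNetv2 simply pre-wire many of these blocks for you, so you can start from a proven architecture instead of assembling it layer by layer.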
Parameters, blocks and settings panels are now collapsible, making it much easier to work with large datasets. Big datasets often include many features, and deep models can sprawl across many layers, so extra space for exploring and building your models helps. We’ve also added a toolbar above the working area on each page, so you’ll always find the necessary action buttons in the same place.
We've rearranged how blocks, settings and dataset information are handled on the Modeling page, moving the most important actions side by side in the right-hand panel to support the complete user workflow.
Improved compiler options. In the runtime settings, the data access seed is now randomized: each experiment gets a true random integer seed, anywhere in the integer range. The data access seed controls the order in which data is accessed during training. Randomizing it makes experiments independent of each other, since each experiment accesses the data in a different order. This is the desired behavior when comparing performance between independent models and runs.
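A small sketch of the idea, not the platform's implementation: the seed deterministically fixes the order in which training examples are visited, so a fixed seed reproduces a run, while a freshly randomized seed per experiment gives each run an independent access order.

```python
import random

def access_order(num_examples, seed):
    """Return the order in which example indices are visited for a given seed."""
    rng = random.Random(seed)            # isolated RNG: runs don't interfere
    order = list(range(num_examples))
    rng.shuffle(order)
    return order

# A fixed seed is reproducible: the same seed always yields the same order.
assert access_order(8, seed=123) == access_order(8, seed=123)

# What the platform now does in spirit: draw a fresh random seed per
# experiment, so each experiment sees the data in its own independent order.
experiment_seed = random.randrange(2**31)
order = access_order(8, experiment_seed)
```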
/ Fast deployment
Technical documentation, API specifications and help are now available on the Deployment page. This means the JSON script can be downloaded and provided to, e.g., a developer outside your team or organization, so they can understand how to call the deployment API and what the request and response parameters should look like, making it easier to collaborate across teams and organizations.
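As a hypothetical sketch of what this enables: given the documented endpoint, token and input schema, an outside developer can assemble a valid API request without any access to the platform itself. All field names below (`url`, `token`, `inputs`, `rows`) are illustrative assumptions, not the platform's actual API.

```python
import json

def build_request(spec, features):
    """Build the URL, headers and JSON body for a deployment API call.

    spec     -- dict mimicking a downloaded deployment spec (hypothetical fields)
    features -- dict mapping input feature names to values for one prediction
    """
    missing = [name for name in spec["inputs"] if name not in features]
    if missing:
        raise ValueError(f"missing input features: {missing}")
    headers = {
        "Authorization": f"Bearer {spec['token']}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"rows": [features]})
    return spec["url"], headers, body

# Example: everything the external developer needs comes from the spec.
spec = {
    "url": "https://example.com/deployment/abc123",   # illustrative endpoint
    "token": "secret-token",                          # illustrative credential
    "inputs": ["age", "income"],
}
url, headers, body = build_request(spec, {"age": 42, "income": 55000})
# body == '{"rows": [{"age": 42, "income": 55000}]}'
```

The returned URL, headers and body could then be passed to any HTTP client to call the deployed model.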
These were the new features deployed during the month of December, and the last round of updates of 2018. We have an exciting and ambitious product roadmap for 2019 and look forward to sharing many more new features and improvements with you in the year to come!