Release notes

Release 19.13

Release date: May 26, 2018
Major Changes in this Release:
New Features and Improvements:
   - Added many new Python bindings.  The following tools are now usable from Python:
      - gaussian_blur(), label_connected_blobs(), randomly_color_image(), jet(),
        skeleton(), find_line_endpoints(), get_rect(), shrink_rect(), grow_rect(),
        image_gradients, label_connected_blobs_watershed(), convert_image(),
        convert_image_scaled(), dpoint, centered_rect(), centered_rects(), length(),
        as_grayscale(), pyramid_down, find_bright_keypoints(), find_bright_lines(),
        find_dark_lines(), find_dark_keypoints(), suppress_non_maximum_edges(),
        find_peaks(), hysteresis_threshold(), sobel_edge_detector(), equalize_histogram(),
        resize_image(), hough_transform, remove_incoherent_edge_pixels(),
        normalize_image_gradients(), line, signed_distance_to_line(), distance_to_line(),
        reverse(), intersect(), count_points_on_side_of_line(),
        count_points_between_lines(), dot(), normalize(), point_transform_projective,
        find_projective_transform(), inv(), transform_image(), angle_between_lines(),
        extract_image_4points(), load_grayscale_image(), min_barrier_distance(). 
      - Added an .add_overlay_circle() method to dlib.image_window.  Also made
        .add_overlay() accept line objects.
      - Added the *_corner() routines to rectangle and drectangle and made these
        objects constructible from instances of each other.
   - Made the Python extension module automatically enable AVX instructions if the host
     machine supports them.  So you no longer need to pass --yes USE_AVX_INSTRUCTIONS
     when installing dlib.

   - New C++ routines:
      - Added an image_window::add_overlay() overload for the line object.
      - Added angle_between_lines()
      - Added extract_image_4points().  A usage sketch appears after this list.
      - Added is_convex_quadrilateral(), find_convex_quadrilateral(), and no_convex_quadrilateral.
      - Added python_list_to_array()
      - Added min_barrier_distance() 
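
   As an illustration of the new extract_image_4points() routine, here is a
   minimal sketch; the file names, output size, and corner coordinates are
   made up for illustration:

      #include <array>
      #include <dlib/image_io.h>
      #include <dlib/image_transforms.h>
      using namespace dlib;

      int main()
      {
          matrix<rgb_pixel> img, card;
          load_image(img, "photo.jpg");
          card.set_size(200, 300);   // the output size defines the warp target
          // Corners of the region to extract.  extract_image_4points() fits the
          // projective transform mapping them onto the output rectangle.
          std::array<dpoint,4> corners = {dpoint(12,30),   dpoint(260,25),
                                          dpoint(265,180), dpoint(15,190)};
          extract_image_4points(img, card, corners);
          save_jpeg(card, "card.jpg");
      }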

Non-Backwards Compatible Changes:

Bug fixes:
   - Fixed numpy_image and pybind11 crashing python sometimes when certain types of
     conversions are attempted.
   - Fixed some python functions not taking as wide a range of image types as they did in
     previous dlib versions.

Release 19.12

Release date: May 19, 2018
Major Changes in this Release:
New Features and Improvements:
   - Added Python interface to threshold_image() and partition_pixels().

Non-Backwards Compatible Changes:
   - In the Python API, renamed dlib.save_rgb_image() to dlib.save_image().

Bug fixes:
   - Dlib 19.11 had a bug that caused the Python interface to reject grayscale images.
     This has been fixed.

Release 19.11

Release date: May 17, 2018
Major Changes in this Release:
New Features and Improvements:
   - Deep Learning
      - Added resize_to layer.
      - Made loss_multiclass_log_per_pixel use CUDA. This greatly accelerates 
        training with this loss layer.
   - Image Processing
      - Added normalize_image_gradients() and remove_incoherent_edge_pixels().
      - Added neighbors_24 (for use with label_connected_blobs())
      - Added partition_pixels() and made threshold_image() use it to find the
        default threshold if none is given.  Also deprecated
        auto_threshold_image() since using partition_pixels() to pick the
        threshold is superior.  A usage sketch appears after this list.
      - Added overload of hysteresis_threshold() that uses partition_pixels()
        to select thresholds.
      - Added encode_8_pixel_neighbors() and find_line_endpoints().
      - Added image_gradients, a tool for computing multi-scale first and
        second order image gradients. 
      - Added find_bright_lines(), find_dark_lines(), find_bright_keypoints(),
        and find_dark_keypoints(). 
      - Added label_connected_blobs_watershed().
      - Added find_peaks().
      - Added find_pixels_voting_for_lines(), find_strong_hough_points(),
        and perform_generic_hough_transform() to the hough_transform.
   - Improved the stopping condition of the solve_qp_box_constrained() and
     solve_qp_box_constrained_blockdiag() quadratic program solvers.
   - Made find_min_single_variable() more numerically stable.
   - Made Visual Studio build with all cores when building dlib.
   - Added steal_memory() to the matrix.
   - Improved the workflow for adding part annotations in imglab.  You can now
     use shift+click to rapidly add parts to an object.  Each part will
     automatically receive an integer label.
   - Added .begin() and .end() to array2d.
   - Added the line class and some related utility functions.
   - Added centered_rects().
   - Added numpy_image, which is a simple type safe interface to numpy arrays.
     It is also a fully functional dlib image class, making interfacing dlib's
     image processing to Python much cleaner.  
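
   To illustrate the new default thresholding, here is a minimal sketch (the
   input file name is hypothetical):

      #include <dlib/image_io.h>
      #include <dlib/image_transforms.h>
      using namespace dlib;

      int main()
      {
          array2d<unsigned char> img, binary;
          load_image(img, "scan.png");
          // New: with no threshold argument, threshold_image() now picks one
          // via partition_pixels().
          threshold_image(img, binary);
          // Equivalent explicit form:
          threshold_image(img, binary, partition_pixels(img));
      }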

Non-Backwards Compatible Changes:
   - All CUDA code in dlib was moved to dlib/cuda.

Bug fixes:
   - Replaced all the old Python C APIs interfacing to numpy with the new
     numpy_image.  This fixed some reference counting errors.
   - Fixed load_bmp() not loading certain types of BMP images. 
   - Fixed compile time bugs when trying to use CBLAS in some unusual environments.
   - Fixed bug in global_function_search's constructor that takes initial function
     evaluations. It wasn't assigning these values into the entire state of the
     solver.
   - Fixed compile time error in the matlab bindings.  Also, fixed cell arrays
     of complex types not binding correctly.
   - Renamed BOOST_JOIN to DLIB_BOOST_JOIN to prevent name clashes when working
     with boost.
   - Fixed bug in hysteresis_threshold() that could cause incorrect output.
   - The last release of dlib added more ODR violation checks.  However, these
     could erroneously trigger when using Visual Studio and CUDA in certain
     workflows.  This has been fixed.
   - Fixed matrix objects not working correctly when sized greater than 4GB in
     Visual Studio.

Release 19.10

Release date: Mar 19, 2018
Major Changes in this Release:
New Features and Improvements:
   - Deep Learning:
      - Added scale_ layer, allowing implementation of squeeze-and-excitation networks.
      - Added loss_multimulticlass_log: used for learning a collection of multi-class classifiers.
   - Added a random forest regression tool. See random_forest_regression_trainer.
   - Added make_bounding_box_regression_training_data()
   - Added isotonic_regression
   - Added momentum_filter, rect_filter, find_optimal_momentum_filter(), and
     find_optimal_rect_filter().
   - Added binomial_random_vars_are_different() and event_correlation().
   - Added xcorr_fft(), a routine for efficiently performing large cross-correlations using the FFT.
   - Added the ramdump type decorator for invoking faster serialization routines.
   - Added check_serialized_version()
   - Added max_scoring_element() and min_scoring_element().  A usage sketch
     appears after this list.
   - Made orthogonalize() faster.
   - Updates to the Python API:
      - Added interface to the global_function_search object.  This is a more general
        interface to the solver used by find_max_global().
      - Added support for variadic Python functions in find_max_global().
      - Added rect_filter and find_optimal_rect_filter().
      - Added make_bounding_box_regression_training_data()
      - Added the image_dataset_metadata routines for parsing XML datasets.
      - Added rvm_trainer
      - Added probability_that_sequence_is_increasing() 
      - Added dlib.__time_compiled__ field
      - Added num_threads to shape_predictor_training_options.
      - Added CUDA controlling routines such as set_device() and 
        set_dnn_prefer_smallest_algorithms().
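
   As an illustration, max_scoring_element() returns a std::pair of the best
   scoring element and its score.  A minimal sketch (this assumes the routine
   is available via dlib/algs.h):

      #include <cmath>
      #include <vector>
      #include <dlib/algs.h>

      int main()
      {
          std::vector<double> vals = {3.0, -7.5, 2.0, 5.1};
          // Find the element with the largest absolute value.
          auto res = dlib::max_scoring_element(vals,
                         [](double v) { return std::abs(v); });
          double best  = res.first;    // -7.5
          double score = res.second;   //  7.5
      }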

Non-Backwards Compatible Changes:
   - Changed CMake so that there is only the dlib target and it isn't forced to
     be static.  Instead, the build type will toggle based on the state of CMake's
     BUILD_SHARED_LIBS variable.  So there is no longer a dlib_shared target.
   - Changed the integer types used to represent sizes from 32 to 64 bits in
     numerous places, such as in the tensor object.  This should be a backwards
     compatible change for nearly all client code.

Bug fixes:
   - Fixed memory leak in java swig array binding tool.
   - Fixed windows include order problem in all/source.cpp file.
   - Fixed cont_ layers not printing the correct num_filters parameter when they were 
     printed to std::cout or to XML.
   - Fixed some code not handling OBJECT_PART_NOT_PRESENT values correctly.
   - Fixed fft_inplace() not compiling for compile time sized matrices.
   - The shape_predictor_trainer could have very bad runtime for some really
     bad parameter settings.  This has been fixed, and warning messages about
     really bad training data or parameters have been added.
   - Fixed the decayed running stats objects so they use unbiased estimators.


Release 19.9

Release date: Jan 22, 2018
Major Changes in this Release:
New Features and Improvements:
   - Switched the Python API from Boost.Python to pybind11.  This means Python
     users don't need to install Boost anymore, making building dlib's Python API
     much easier.
   - Made the sparse version of svd_fast() use multiple CPU cores.
   - Changed the behavior of imglab's --flip option.  It will now attempt to
     adjust any object part labels so that the flipped dataset has the same
     average part layout as the source dataset. There is also a new --flip-basic 
     option that behaves like the old --flip. However, most people flipping a
     dataset with part annotations will want to use --flip.  For more details
     see: http://blog.dlib.net/2018/01/correctly-mirroring-datasets.html

Non-Backwards Compatible Changes:
   - Removed std::auto_ptr from dlib's old (and deprecated) smart pointers. 

Bug fixes:
   - Fixed global_optimization.py not working in Python 3.


Release 19.8

Release date: Dec 19, 2017
Major Changes in this Release:
New Features and Improvements:
   - Added a global optimizer, find_max_global(), which is suitable for
     optimizing expensive functions with many local optima.  For example, you
     can use it for hyperparameter optimization.  See model_selection_ex.cpp
     for an example, and the sketch after this list.
   - Updates to the deep learning tooling:
      - Added semantic segmentation examples: dnn_semantic_segmentation_ex.cpp
        and dnn_semantic_segmentation_train_ex.cpp
      - New layers: loss_ranking, loss_epsilon_insensitive, softmax_all, and loss_dot.
      - Made log loss layers more numerically stable.
      - Upgraded the con layer so you can set the number of rows or columns to
        0 in the layer specification. Doing this means "make the filter cover
        the whole input image dimension".  This provides an easy way to make a
        filter sized so it will have one output along that dimension,
        effectively making it like a fully connected layer operating on a row
        or column.
      - Added support for non-scale-invariant MMOD.
      - Added an optional parameter to dnn_trainer::get_net() that allows you
        to call the function without forcing a state flush to disk.
      - Sometimes the loss_mmod layer could experience excessively long runtime
        during early training iterations.  This has been optimized and is now
        much faster.
      - Optimized the tensor's management of GPU memory.  It now uses less memory
        in some cases.  It will also not perform a reallocation if resized to a
        smaller size.  Instead, tensors now behave like std::vector in that
        they just change their nominal size but keep the same memory, only
        reallocating if they are resized to something larger than their
        underlying memory block. This change makes some uses of dlib faster, in
        particular, running networks on a large set of images of differing
        sizes will now run faster since there won't be any GPU reallocations,
        which are notoriously slow.
      - Upgraded the input layer so you can give
        input<std::array<matrix<T>,K>> types as input. Doing
        this will create input tensors with K channels.
   - Added disjoint_subsets_sized
   - Added Python APIs: get_face_chips(), count_steps_without_decrease(),
     count_steps_without_decrease_robust(), and jitter_image().
   - Various improvements to CMake scripts: e.g. improved warning and error
     messages, added USE_NEON_INSTRUCTIONS option.
   - chol() will use a banded Cholesky algorithm for banded matrices, making it
     much faster in these cases.
   - Changed the timing code to use the C++11 high resolution clock and
     atomics. This makes the timing code a lot more precise.
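
   A minimal sketch of calling find_max_global() on a toy two-variable
   function; the bounds and evaluation budget are arbitrary:

      #include <dlib/global_optimization.h>
      using namespace dlib;

      int main()
      {
          auto result = find_max_global(
              [](double x, double y) { return -(x-0.5)*(x-0.5) - (y+0.2)*(y+0.2); },
              {-10, -10},               // lower bounds on x and y
              { 10,  10},               // upper bounds on x and y
              max_function_calls(60));  // evaluation budget
          // result.x holds the best inputs found and result.y the best value.
      }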

Non-Backwards Compatible Changes:
   - Changed the random_cropper's set_min_object_size() routine to take min box
     dimensions in the same format as the mmod_options object (i.e. two lengths
     measured in pixels). This should make defining random cropping strategies
     that are consistent with MMOD settings more straightforward since you can
     simply take the mmod_options settings and give them to the random_cropper
     and it will do the right thing.  A sketch appears after this list.
   - Changed the mean squared loss layers to return a loss that's the MSE, not
     0.5*MSE. The only thing this affects is the logging messages that print
     during training, which were confusing since the reported loss was half the
     size you might naively expect.
   - Changed the outputs of test_regression_function() and cross_validate_regression_trainer().
     These functions now output 4D rather than 2D vectors.  The new output is:
     mean squared error, correlation, mean absolute error, and standard
     deviation of absolute error.  I also made test_regression_function() take
     a non-const reference to the regression function so that DNN objects can
     be tested.
   - Fixed shape_predictor_trainer padding so it behaves as it used to. In
     dlib 19.7 the padding code was changed and accidentally doubled the size
     of the applied padding in some cases. It's not a huge deal either way, but
     this change reverts to the previous behavior.
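
   For example, the cropper can now be configured directly from the same two
   pixel lengths you would give mmod_options (a sketch; the particular sizes
   are arbitrary):

      #include <dlib/image_transforms/random_cropper.h>
      using namespace dlib;

      int main()
      {
          random_cropper cropper;
          cropper.set_chip_dims(200, 200);
          // The same two lengths (in pixels) you would pass to mmod_options
          // when defining the smallest detectable box.
          cropper.set_min_object_size(70, 30);
      }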

Bug fixes:
   - Fixed toMat() not compiling in some cases.
   - Significantly reduced the compile time of the DNN example programs in
     visual studio.
   - Fixed a few image processing functions that weren't using the generic
     image interface.
   - Fixed a bug in the random_cropper where it might crash due to division by
     0 if small images were given as input.
   - Fixed a bug in how the mmod_options automatically determines detection
     window sizes. It would pick a bad size in some cases.
   - Fixed load_image_dataset()'s skip_empty_images() option. It wasn't
     skipping images that only have ignore boxes when you load into mmod_rect
     objects.
   - Fixed a bug where chinese_whispers(), when called from Python, would
     sometimes return a labels array that didn't include labels for all the
     inputs.
   - Fixed a bug in dlib's MS Windows GUI code that was introduced a little
     while back when we switched everything to std::shared_ptr.  This change
     fixes a bug where the program crashes or hangs sometimes during program
     shutdown.
   - Fixed error in TIME_THIS() introduced in dlib 19.7. It was printing
     seconds when it said minutes in the output.
   - Added the missing implementation of tabbed_display::selected_tab().
   - Changed the windows signaler and mutex code to use the C++11 thread
     library instead of the old win32 functions. I did this to work around how
     windows unloads dlls. In particular, during dll unload windows will kill
     all threads, THEN it will destruct global objects. So this can lead to
     problems when a global object that owns threads tries to tell them to
     shutdown, since the threads have already vanished.  The new code mitigates
     some of these problems, in particular, there were some cases where
     unloading dlib's python extension would deadlock.  This should now be
     fixed.
   - Fixed compile time errors when either of these macros were enabled:
     DLIB_STACK_TRACE, DLIB_ISO_CPP_ONLY.


Release 19.7

Release date: Sep 17, 2017
Major Changes in this Release:
New Features and Improvements:
   - Deep Learning:
      - The CNN+MMOD detector is now a multi-class detector.  In particular,
        the mmod_rect object now has a string label field which you can use to
        label objects, and the loss_mmod_ layer will learn to label objects with
        those labels.  A sketch appears after this list.  For an example, see:
        https://www.youtube.com/watch?v=OHbJ7HhbG74
      - CNN+MMOD detectors are now 2.5x faster.  For instance, this example program
        http://dlib.net/dnn_mmod_find_cars_ex.cpp.html now runs at 98fps instead
        of 39fps.  
   - Added a 5 point face landmarking model that is over 10x smaller than the
     68 point model, runs faster, and works with both HOG and CNN generated
     face detections.  It is now the recommended landmarking model to use for
     face alignment.  render_face_detections() and get_face_chip_details() have been
     updated to work with both 5 and 68 point models, so the new 5 point model is
     a drop-in replacement for the 68 point model.
   - The imglab tool is slightly improved.  It will display box labels with
     higher relative contrast.  You can also now press END or i to ignore boxes
     in imglab.  This is useful because hitting END is a much less stressful
     hand motion than hitting i in most cases.
   - Added overloads of sub_image() that take raw pointers so you can make
     sub_images of anything. 
   - Changed TIME_THIS() to use std::chrono::high_resolution_clock, so now it's
     much higher precision.
   - Exposed Chinese whispers clustering to Python, added face clustering example.
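
   A sketch of attaching labels to training boxes for a multi-class detector;
   the label strings and coordinates are made up:

      #include <vector>
      #include <dlib/image_processing/full_object_detection.h>
      using namespace dlib;

      int main()
      {
          std::vector<std::vector<mmod_rect>> boxes(1);
          mmod_rect car(rectangle(10, 10, 90, 70));
          car.label = "car";            // new: per-box string label
          mmod_rect sign(rectangle(120, 15, 160, 55));
          sign.label = "stop_sign";
          boxes[0].push_back(car);
          boxes[0].push_back(sign);
          // A loss_mmod_ network trained on boxes will learn to emit these
          // labels along with its detections.
      }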

Non-Backwards Compatible Changes:

Bug fixes:
   - Fixed an error in input_rgb_image_pyramid::image_contained_point(). The
     function might erroneously indicate that a point wasn't inside the original
     image when really it was, causing spurious error messages.
   - mmod_options would pick bad window sizes in some corner cases. This has been fixed.
   - Fixed a bug in the extract layer that triggered when a tensor with a
     different number of samples than the tensor used to initialize the network
     was passed through the layer.
   - The loss_per_missed_target parameter of loss_mmod_ wasn't being applied
     exactly right when boxes were auto-ignored.  There weren't any practical
     user-facing problems due to this, but it has nevertheless been fixed.


Release 19.6

Release date: Aug 28, 2017
Major Changes in this Release:
New Features and Improvements:

Non-Backwards Compatible Changes:

Bug fixes:
   - Fixed a build error in Visual Studio when CUDA is enabled.


Release 19.5

Release date: Aug 27, 2017
Major Changes in this Release:
New Features and Improvements:
   - Deep Learning
      - Added a python wrapper for using the CNN face detector.
      - Added support for cuDNN v6 and v7.
      - Added a simple tool to convert dlib model files to caffe models.  
        See the tools/convert_dlib_nets_to_caffe folder for details.
      - New DNN layers
         - loss_multiclass_log_per_pixel_
         - loss_multiclass_log_per_pixel_weighted_
         - loss_mean_squared_per_pixel_
         - cont_       (transpose convolution, sometimes called "deconvolution")
         - mult_prev_  (like add_prev_ but multiplies instead of adds)
         - extract_    (sort of like caffe's slice layer)
         - upsample_   (upsamples a tensor using bilinear interpolation)
      - Object Detection
         - Upgraded loss_mmod_ to support objects of varying aspect ratio. This
           changes the API for the mmod_options struct slightly.
         - Relaxed the default non-max suppression parameters used by the
           mmod_options object so that users of the deep learning MMOD tool don't
           get spurious errors about impossibly labeled objects during training.
         - Added missing input validation to loss_mmod_.  Specifically, the loss
           layer now checks if the user is giving truth boxes that can't be detected
           because the non-max suppression settings would prevent them from being
           output at the same time. If this happens then we print a warning message
           and set one of the offending boxes to "ignore".  I also changed all
           the input validation errors to warning messages with auto conversion
           to ignore boxes rather than exceptions.
         - Changed the random_cropper's interface so that instead of talking in
           terms of min and max object height, it's now min and max object size.
           This way, if you have objects that are short and wide (i.e. objects where
           the relevant dimension is width rather than height) you will get sensible
           behavior out of the random cropper.
          - Added options to input_rgb_image_pyramid that let the user set
            create_tiled_pyramid()'s padding parameters. Also changed the default
            outer border padding from 0 to 11. This affects even previously trained
            models. So any model that doesn't explicitly set the outer padding to
            something else will have a padding of 11. This should be a more
            reasonable value for most networks.
          - Added process() and process_batch() to add_loss_layer. These routines
            let you easily pass arguments to any optional parameters of a loss
            layer's to_label() routine. For instance, it makes it more convenient
            to set loss_mmod_'s adjust_threshold parameter (see the sketch after
            this list).
      - Added visit_layers_until_tag()
      - Improved how dnn_trainer synchronizes its state to disk.  It now uses
        two files and alternates between them.  This should be more robust in
        the face of random hardware failure during synchronization than the
        previous synchronization method.
      - Made it so you can set the number of output filters for con_ layers at runtime.
      - The way cuDNN work buffers are managed has been improved, leading to
        less GPU RAM usage.  Therefore, users should not need to call
        set_dnn_prefer_smallest_algorithms() anymore.
      - Added operator<< for random_cropper and dnn_trainer to allow 
        easy logging of training parameters.
      - Made concat_ layer a lot faster.
      - Made the dnn_trainer not forget all the previous loss values it knows
        about when it determines that there have been a lot of steps without
        progress and shrinks the learning rate. Instead, it removes only a
        small amount of the oldest values.   The problem with the old way of
        removing all the loss values in the history was that if you set the
        steps without progress threshold to a really high number you would
        often observe that the last few learning rate values were obviously not
        making progress, however, since all the previous loss values were
        forgotten the trainer needed to fully populate its loss history from
        scratch before it would figure this out.  This new style makes the
        trainer not waste time running this excessive optimization of obviously
        useless mini-batches.  I also changed the default
        get_test_iterations_without_progress_threshold() from 200 to 500.  Now
        that we have a better history management of loss values in the trainer
        it's much more sensible to have a larger value here.
   - Dlib's simd classes will now use ARM NEON instructions.  This makes the
     HOG based object detector faster on mobile devices running ARM processors.
   - Added last_modified() method to dlib::file.  Also, added
     select_oldest_file() and select_newest_file().
   - Added solve_qp_box_constrained_blockdiag()
   - Added an overload of mat() that takes a row stride value.
   - Added cmake scripts and some related tooling that makes it easy to call
     C++ code from java.  See dlib/java/ folder.  
   - MATLAB MEX wrapper API
      - Made the mex wrapper deal with cell arrays that have null elements.
      - Made ctrl+c detection in a mex file work more reliably in newer versions of matlab.
   - Added set_rect_area()
   - Gave test_object_detection_function() an option to set how ignore box
     overlap is tested.
   - Added serialization support for the running_stats_decayed object.
   - Additions to imglab
      - Added --sort and also the ability to propagate boxes from one image to
        the next using dlib::correlation_tracker.
      - Made it so you can remove images by pressing alt+d. 
      - Made it so pressing e in imglab toggles between views of the image
        where the histogram is equalized or unmodified. This way, if you are
        looking at particularly dark or badly contrasted images you can toggle
        this mode and maybe get a better view of what you are labeling.
   - Made the attribute_list of the xml parser a little more friendly by
     allowing you to ask for attributes that don't exist and get a defined
     behavior (an exception being thrown) rather than it being a contract
     violation.
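
   A compilable toy sketch of the new process() routine with a loss_mmod_
   network; the tiny one-layer network and window size are made up for
   illustration, and a real detector would of course be trained first:

      #include <dlib/dnn.h>
      #include <dlib/image_transforms.h>
      using namespace dlib;

      // A toy single-layer detection net; real networks are much deeper.
      using net_type =
          loss_mmod<con<1,6,6,1,1, input_rgb_image_pyramid<pyramid_down<6>>>>;

      int main()
      {
          mmod_options options;
          options.detector_windows.push_back(
              mmod_options::detector_window_details(40, 40));
          net_type net(options);

          matrix<rgb_pixel> img(200, 200);
          assign_all_pixels(img, rgb_pixel(0, 0, 0));

          // Runs img through the net and forwards -0.5 to the loss layer's
          // to_label() as the adjust_threshold argument.
          std::vector<mmod_rect> dets = net.process(img, -0.5);
      }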

Non-Backwards Compatible Changes:
   - DNN solver objects are now required to declare operator<<.
   - Broke backwards compatibility with previous dnn_trainer serialization
     format.  The network serialization format has not changed however.  So old
     model files will still load properly.
   - Changed random_cropper interface.
   - Changed the XML format output by net_to_xml(). Specifically, the XML tag
     for affine layers was changed to use the same conventions as other layers
     that support convolutional vs fully connected modes.
   - Dlib's smart pointers have been deprecated and all of dlib's code has been
     changed to use the std:: version of these smart pointers.  The old dlib
     smart pointers are still present, allowing users to explicitly include
     them if needed, but users should migrate to the C++11 standard version of
     these tools. 
   - Changed the functions that transform between input tensor coordinates and
     output tensor coordinates to use dpoint instead of point. This way, we can
     obtain sub-pixel coordinates if we need them.
   - Upgraded loss_mmod_ to support objects of varying aspect ratio. This
     changes the API for the mmod_options struct slightly.

Bug fixes:
   - Made resize_image() and functions that use it like the pyramid objects
     produce better results when run on float and double images. There was
     needless rounding to integers happening in the bilinear interpolation. Now
     if you work with a float image the entire process will run without integer
     rounding.
   - Made the input_tensor_to_output_tensor() and output_tensor_to_input_tensor() 
     coordinate mappings work on networks that contain skip layers.
   - The input_rgb_image_sized is supposed to be convertible to
     input_rgb_image, which it was in all ways except that you couldn't
     deserialize one directly as you would expect. This has now been fixed.
   - There was a bug in the concat_ layer's backward() method. It was assigning
     the gradient to previous layers instead of adding the gradient, as required
     by the layer interface specification.  Probably no one has been impacted
     by this bug, but it's still a bug and has been fixed.
   - Changed the random_cropper so that it samples background patches uniformly
     across scales regardless of the input image size. Previously, if you gave
     really large images or really small images it had a bias towards giving only
     large patches or small patches respectively.
   - Fixed name lookup problem for calls to serialize() on network objects.
   - Fixed double delete in tokenizer_kernel_1.
   - Fixed error in pyramid_down<2> that caused the output image to be a
     little funny looking in some cases.
   - Fixed the visit_layers_backwards() and visit_layers_backwards_range()
     routines so they visit layers in the correct order.
   - Made build scripts work on a wider range of platforms and configurations.
   - Worked around global timer cleanup issues that occur on windows when dlib
     is used in a dll in some situations.
   - Fixed various compiler errors in obscure environments.


Release 19.4

Release date: Mar 07, 2017
Major Changes in this Release:
New Features:

Non-Backwards Compatible Changes:
   - CMake 2.8.12 is now required to build dlib (but only if you use CMake).  

Bug fixes:
   - Fixed a slow memory leak that could occur when using cuDNN.

Other:



Old Release Notes