Roger Orr gave this month’s ACCU presentation on Making Templates Easier in C++. He showed two techniques that people commonly use to tailor template implementations for specific types: Tag Dispatch and SFINAE (via enable_if).
With Tag Dispatch, you can switch to different implementations of a template function using traits classes based on one of the input parameter types (e.g. use std::iterator_traits to target a faster implementation for random access iterators). The downside is that you often have to duplicate code across the different implementations.
With SFINAE, you can use typedefs within a parameter type to disable particular overloads. E.g. STL containers have a ::value_type typedef, so you can use that to differentiate between collections and scalar inputs. The downside is that you sometimes have to add additional, defaulted template parameters to allow the compiler to distinguish between otherwise identical template definitions.
Roger then introduced constexpr if from C++17 and concepts from C++20.

The advantages of constexpr if are that it can be used both inside and outside templates, and that specialisations can be defined inline. Any code that would not compile can be put inside a constexpr if branch and will be discarded. This seems more straightforward than the recursive template solutions Roger showed earlier in the talk.
Concepts are intended to define the requirements of a template in a way that is visible to the compiler as well as to the developer. Reusing concept definitions should lead to a domain-specific vocabulary that helps within a project. Better still, using a template parameter type that doesn’t satisfy a concept’s requirements generates a far more helpful error message than if SFINAE were used to enforce the same constraints.
The finale was an overview of the new spaceship operator, <=>!
The video is now available on the SkillsMatter website.
I was lucky to get a ticket to hear Andrew Blake’s Lovelace lecture, on the subject of “Machines that (learn to) See”.
Machine vision works nowadays. Machines can: navigate using vision; separate object from background; recognise a wide variety of objects, and track their motion. These abilities are great spin-offs in their own right, but are also part of an extended adventure in understanding the nature of intelligence through visual perception.
The speaker was Laboratory Director at Microsoft Research, Cambridge, and his team was behind the Kinect technology. He is now Research Director at the Turing Institute.
The lecture covered the history of machine vision over the last 50 years, the rise and fall of different approaches to AI over the decades, and finally the recent successes of analysis-by-synthesis and empirical recognisers.
Phil Nash organised another C++ London meet-up at SkillsMatter last week. The first talk was by Pete Goldsborough, who gave a rapid overview of the Clang tooling libraries. The second talk was by Kris Jusiak, who talked through the motivation and usage of his Boost.DI dependency injection library. This was more relevant to my work because Kris’s example showed how Boost.DI aims to reduce the overhead in setting up test scenarios for GTest/GMock. I’ve been pretty happy with the way my unit tests look so far, but next time I’ll definitely look at whether his injector object could simplify my code.
Maksim gave a very interesting presentation on Machine Learning, from his perspective as a physicist.
Machine Learning, AI and NLP are some of the most exciting emerging technologies. They are becoming ubiquitous and will profoundly change the way our society functions. In this talk I hope I can provide a unique perspective, as someone who has entered the field coming from a more traditional Physics background.
Physics and Machine Learning have much in common. I will explain how the two fields relate and how a physical point of view can help elucidate many ML concepts. I will show how we can use Python code to generate illustrative visualizations of Machine Learning algorithms. Using these visual tools I will demonstrate SVMs, overfitting, clustering and dimensionality reduction. I will explain how intuition, common sense and careful statistics matter greatly when doing Machine Learning, and I’ll describe some tools used in production.
Maksim used Jupyter Notebooks for the demonstration parts of his talk. It’s a great way to show snippets of code as well as plotting charts – I’ve also been using it for a Python library that I’m working on.
The big take-away was that the audience should think of machine learning as very accessible – although there are hard problems left to research, there are a lot of materials available on the internet and much can be understood readily, especially from a visual perspective.
This evening’s presentation at the Institute of Engineering and Technology was sponsored by Hitachi on the subject of The Cloud.
As the Public Cloud sees explosive growth from modern internet-based businesses and their web-native applications, how can organisations with a more traditional IT landscape benefit from some of these trends whilst maintaining their legacy?
Neil Lewis explained that, despite years at the forefront of Data Services, Hitachi Data Systems is now re-positioning itself as a Cloud Solutions provider, rather than solely provisioning private infrastructure and software support to enterprises. Whether they can compete with Amazon Web Services or Microsoft Azure, time will tell – but Hitachi have decided to adapt rather than see their business model become irrelevant.
Phil Nash presented his ideas on functional C++ to a packed ACCU meeting a couple of weeks ago. He kindly provided the slides on his website.
For the uninitiated, the functional style is often quite a shock, but having written F# for some time, I’m in favour of “modelling computations as evaluations of expressions” as Phil presented it, or the declarative style as it’s often described. I wrote about Higher-Order Functions in C++ recently and Phil touched on that as well.
One of the highlights of the talk was the section on persistent data structures, which share as much of the previous state as possible whenever elements are added. For example, adding an element to an associative binary tree can create a new root while retaining links to the bulk of the original tree. There are challenges in keeping such trees balanced, but the benefits can be worth it (e.g. a persistent red-black tree that’s thread-safe because all the data is immutable). Phil also presented a trie hybrid with hashing – a persistent tree structure with performance similar to std::unordered_map, in which the hashing ensures no re-balancing is required.
The finale was a demonstration of pipelining for C++, based on std::optional (available from C++17). The recommendation was to watch Eric Niebler’s Ranges talk from CppCon 2015 for more details.
This evening’s lecture at the IET was given by Chris Aylett of the Motorsport Industry Association. Chris gave a fast-paced overview of the work of motorsport engineers within their own industry and the increasing crossover into other sectors. He is a fan of horizontal innovation, the application of under-used skills and capacity within a firm to satisfy demand from clients in other industries.
This is particularly appropriate for the world-class unique capabilities of R&D-based motorsport suppliers in the UK who are able to resolve disparate engineering problems, and do so very quickly.
Particular examples were given by speakers from Wirth Research, Prodrive and Lentus Composites. The latter were responsible for the design of the Team GB track bikes which did rather well at the Rio Olympics – having been developed in just 13 months.
There was also plenty to reference from the inspirational life story of Sir Henry Royce. Despite having only one year of formal schooling, he became an apprentice engineer and ultimately started his own business making cranes. Not only did he expand into making motor cars and design the aero-engine that powered the first aircraft to fly at over 400mph (an engine later developed into the famous Rolls-Royce Merlin of the WWII Spitfires) – he also designed the bayonet lightbulb fitting.