Category Archives: C++ Code

Higher-Order functions in C++

The other day, I was writing some C++ and found that I was thinking about how to manipulate the data I had as if I was writing F#. It would have been convenient to turn a std::map into an array of tuples, which I could do in F# like this:

let f (xs : Map<int,string>) =
  let xs = xs |> Map.toArray
  // now treat xs as array of tuples...

There’s no function in the STL to do this off the bat – instead, you have to roll your own (not that it’s much code, but it does break your thought process if you have to stop and write this sort of thing every time).
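For reference, the roll-your-own version is only a few lines – here’s a minimal sketch using std::transform (assumed headers noted in the comment); the Map::to_vector function later in this post packages up the same idea:

// assumes <algorithm>, <iterator>, <map>, <string>, <utility> and <vector>
std::map<int, std::string> m{ { 1, "Hi" }, { 2, "Bye" } };
std::vector<std::pair<int, std::string>> tuples;
std::transform( m.begin(), m.end(), std::back_inserter( tuples ),
                []( const auto& kvp ){ return std::make_pair( kvp.first, kvp.second ); } );
// tuples now holds { {1,"Hi"}, {2,"Bye"} }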

Of course, this is just one of many helpful higher-order functions provided in the F# Map module – and there are counterparts for each of the collection types, e.g. Array, Set, List, etc. In C++, the nearest equivalent is the STL, which provides both collection classes and a number of algorithms that operate on them. Better still, from C++11 onwards we have lambdas, which make using STL algorithms much easier. Even so, in most cases the F# operations seem much more tailored to the sort of data transformation I see at work – our F# codebase is littered with map/filter/fold operations as people transform, select and accumulate data. Conversely, our C++ codebase is full of … for loops, evidence in my eyes that STL algorithms aren’t as immediately applicable. In fact, the ease of use of higher-order functions was one of the reasons that F# was quickly adopted in my workplace (along with immutability, strong typing, conciseness, type inference and syntax checking).

I’ve written one-to-one C++ equivalents of the F# module functions that I use the most for Map and Vector – see below. Interestingly, I found that I really did have to ‘engage brain’ to write some of these, particularly Map.filter. For that one, you can’t use the erase-remove idiom because map keys are both const and strictly ordered (whereas for Vector, erase-remove_if implements filter neatly). A library of functions as per my code below would definitely be a productivity boost.

First, I’ve factored some common utilities into namespace Collection:

namespace MusingStudio
{
    namespace Collection
    {
        template <typename C, typename F>
        C& filter( C& collection, F keep_predicate )
        {
            auto erase_predicate = [&pred=keep_predicate]( auto&& x ){ return !pred( std::forward<decltype(x)>(x) ); };
            collection.erase( std::remove_if( collection.begin(), collection.end(), erase_predicate ), collection.end() );
            return collection;
        }
        
        // This form of filter always takes a copy and applies the filter to it
        // - sometimes you want to preserve the original collection
        template <typename C, typename F>
        C filter_copy( const C& collection, F keep_predicate )
        {
            C target;
            std::copy_if( collection.begin(), collection.end(), std::inserter( target, target.end() ), keep_predicate );
            return target;
        }
        
        template <typename C, typename F, typename A>
        auto fold( const C& items, F f, A&& init )
        {
            // A&& is a forwarding reference, so decay it (needs <type_traits>);
            // otherwise passing an lvalue init would make the accumulator type a reference
            std::decay_t<A> acc{ std::forward<A>(init) };
            for ( const auto& item : items )
            {
                f( acc, item );
            }
            return acc;
        }
                
        // F( T ) -> T and collection C is mutated
        template <typename C, typename F>
        C& transform( C& items, F f )
        {
            for ( auto& t : items )
            {
                t = f( t );
            }
            
            return items;
        }

    }
}

Next, here are the higher-order functions that I use for Map:

namespace MusingStudio
{
    namespace Map
    {
        // filter_copy takes a copy of the original collection then applies the filter
        template< typename K, typename V, typename F>
        std::map<K,V> filter_copy( const std::map<K,V>& items, F predicate )
        {
            return Collection::filter_copy( items, predicate );
        }
        
        template< typename K, typename V, typename F>
        std::map<K,V>& filter( std::map<K,V>& items, F predicate )
        {
            // NB the erase-remove_if idiom does not work for std::map
            // because the nodes must remain ordered by key.  This is enforced
            // by std::map<K,V> holding keys as const K.  So any assignment
            // to the key (to effectively re-order the binary tree) fails to compile.
            // http://stackoverflow.com/questions/9515357/map-lambda-remove-if
            
            // instead, manually iterate over the collection, erasing items
            // for which predicate() returns false
            for ( auto it = items.begin(), itEnd = items.end(); it != itEnd; )
            {
                if ( predicate( *it ) )
                {
                    ++it; // ok - keep this item
                }
                else
                {
                    it = items.erase( it );
                }
            }
            
            return items;
        }
        
        template <typename K, typename V>
        auto to_vector( const std::map<K,V>& collection )
        {
            std::vector< std::pair<K,V> > items;
            
            for ( const auto& item : collection )
            {
                items.push_back( std::make_pair( item.first, item.second ) );
            }
            
            return items;
        }
        
        template <typename K, typename V>
        auto keys( const std::map<K,V>& collection )
        {
            std::set< K > items;
            
            for ( const auto& item : collection )
            {
                items.insert( item.first );
            }
            
            return items;
        }
        
        template <typename K, typename V>
        auto values( const std::map<K,V>& collection )
        {
            std::vector< V > items;
            
            for ( const auto& item : collection )
            {
                items.push_back( item.second );
            }
            
            return items;
        }
        
        template<typename K, typename V, typename F, typename A>
        auto fold( const std::map<K,V>& items, F f, A&& init )
        {
            return Collection::fold( items, f, std::forward<A>(init) );
        }
        
        // F( std::pair<K,V> ) -> std::pair< L, U >
        // Construct a new std::map<L,U> mapping from (K,V) to (L,U)
        template <typename K, typename V, typename F>
        auto map( const std::map<K,V>& items, F f )
        {
            using KVP = typename std::map<K,V>::value_type;
            using RVP = decltype( f( KVP() ) );
            
            std::map< decltype( RVP().first ), decltype( RVP().second ) > result;
            
            for ( const KVP& kvp : items )
            {
                result.insert( f( kvp ) );
            }
            
            return result;
        }
        
        // F( K, V ) -> V and std::map<K,V> is mutated
        template <typename K, typename V, typename F>
        std::map<K,V>& transform( std::map<K,V>& items, F f )
        {
            using KVP = typename std::map<K,V>::value_type;
            
            for ( const KVP& kvp : items )
            {
                items[kvp.first] = f( kvp.first, kvp.second );
            }
            
            return items;
        }
    }
}

And here are the higher-order functions that I use for Vector:

namespace MusingStudio
{
    namespace Vector
    {
        template< typename T, typename F>
        std::vector<T>& filter( std::vector<T>& items, F predicate )
        {
            Collection::filter( items, predicate );
            return items;
        }
        
        template< typename T, typename F>
        std::vector<T> filter_copy( const std::vector<T>& items, F predicate )
        {
            return Collection::filter_copy( items, predicate );
        }
        
        // Requires F to have signature void( A&, T )
        template< typename T, typename F, typename A>
        auto fold( const std::vector<T>& items, F f, A&& init )
        {
            return Collection::fold( items, f, std::forward<A>(init) );
        }
        
        template< typename T, typename P = std::less<T> >
        std::vector<T>& sort( std::vector<T>& items, P compare = P() )
        {
            std::sort( items.begin(), items.end(), compare );
            return items;
        }
        
        template< typename T, typename P = std::less<T> >
        std::vector<T> sort_copy( const std::vector<T>& items, P compare = P() )
        {
            std::vector<T> result( items );
            std::sort( result.begin(), result.end(), compare );
            return result;
        }
        
        // F( T ) -> U, construct a new vector<U>, mapping from T to U
        template <typename T, typename F>
        auto map( const std::vector<T>& items, F f )
        {
            using U = decltype( f(T()) );
            
            std::vector< U > result;
            std::transform( items.begin(), items.end(), std::back_inserter( result ), f );
            return result;
        }
        
        // F( T ) -> T and std::vector<T> is mutated
        template <typename T, typename F>
        std::vector<T>& transform( std::vector<T>& items, F f )
        {
            return Collection::transform( items, f );
        }
    }
}

Here are some unit tests that show how much easier it is to use the Map/Vector functions instead of going directly to STL – I’d argue that this code is comparable to F# for conciseness (although F# code would still benefit from pipelining subsequent operations).

#include <iostream>

#include <gmock/gmock.h>
#include <Vector.hpp>
#include <Map.hpp>

using namespace testing;
using namespace MusingStudio;

TEST( Map, to_vector )
{
    using Mapped = std::map<int, std::string>;
    using Tuples = std::vector<std::pair<int,std::string> >;
    
    Mapped items{ { 1, "Hi" }, { 2, "Bye" } };
    
    EXPECT_EQ( (Tuples{ { 1, "Hi" }, { 2, "Bye" } }), 
      Map::to_vector( items ) );
}

TEST( Map, keys )
{
    using Mapped = std::map<int, std::string>;
    using Keys = std::set<int>;
    
    Mapped items{ { 1, "Hi" }, { 2, "Bye" } };
    
    EXPECT_EQ( (Keys{ 1, 2 }), 
      Map::keys( items ) );
}

TEST( Map, values )
{
    using Mapped = std::map<int, std::string>;
    using Values = std::vector<std::string>;
    
    Mapped items{ { 1, "Hi" }, { 2, "Bye" } };
    
    EXPECT_EQ( (Values{ "Hi", "Bye" }), 
      Map::values( items ) );
}

TEST( Map, filter )
{
    using Mapped = std::map<int, int>;
    
    Mapped items{ {1,1}, {2,4}, {3,9}, {4,16} };
    
    Mapped even_keys{ {2,4},{4,16} };
    auto lambda = []( const auto& keyvaluepair ){ return keyvaluepair.first % 2 == 0; };
    
    // Map::filter will mutate parameter 'items'
    EXPECT_EQ( even_keys, 
      Map::filter( items, lambda ) );
    EXPECT_EQ( 2, items.size() );
}

TEST( Map, filter_copy )
{
    using Mapped = std::map<int, int>;
    
    Mapped items{ {1,1}, {2,4}, {3,9}, {4,16} };
    
    Mapped even_keys{ {2,4},{4,16} };
    auto lambda = []( const auto& keyvaluepair ){ return keyvaluepair.first % 2 == 0; };
    
    // Map::filter_copy creates a copy, so parameter 'items' is untouched
    EXPECT_EQ( even_keys, 
      Map::filter_copy( items, lambda ) );
    EXPECT_EQ( 4, items.size() );
}

TEST( Map, fold )
{
    using Mapped = std::map<int, int>;
    
    Mapped items{ {1,2}, {3,4}, {5,6} };
    
    // Map::fold takes F( A&, pair<K,V> ) -> void
    EXPECT_EQ( 21, 
      Map::fold( items, 
        []( int& acc, const auto& kvp ){ acc += kvp.first + kvp.second; }, 0 ) );
}

TEST( Map, transform_mutable_values_only )
{
    using Transformed = std::map<int, int>;

    // Map::transform over the values, mutating them
    // Takes F(K,V) -> V i.e. the type of the return value must be V
    // "items" must be a named variable because parameter is non-const
    // (we will mutate it)
    Transformed items = { {1,1}, {2,2}, {3,3} };
    
    EXPECT_EQ( (Transformed{ {1,1}, {2,4}, {3,9} }),
      Map::transform( items, 
        []( int _, int v ){ return v*v; } ) );
}

TEST( Map, map_keys_and_values )
{
    using Mapped = std::map<int,double>;
    
    // Map::map over the pairs<key,value>
    // Takes F( pair<K,V> ) -> pair<K',V'> 
    // i.e. both key and value types can change
    auto lambda =
        []( const auto& kvp )
        {
            return std::make_pair( kvp.first + kvp.second,
                                   (double)kvp.second / (double)kvp.first );
        };
    
    // Map::map - new keys and values, not mutating the original collection, 
    // can be passed as unnamed temporary
    EXPECT_EQ( (Mapped{ {2,1}, {6,2} }),
      Map::map( std::map<int,int>{ {1,1}, {2,4} }, lambda ) );
}

TEST( Vector, filter )
{
    std::vector<int> items{ 1,2,3,4,5,4,3,2,1 };

    // Vector::filter will mutate the input collection
    EXPECT_EQ( (std::vector<int>{1,2,2,1}),
      Vector::filter( items,
        [](int i){ return 0 <= i && i <= 2; } ) );
    EXPECT_EQ( 4, items.size() );
}

TEST( Vector, filter_copy )
{
    std::vector<int> items{ 1,2,3,4,5,4,3,2,1 };
    auto untouched_size = items.size();
    
    // Vector::filter_copy creates a copy, so parameter 'items' is untouched
    EXPECT_EQ( (std::vector<int>{1,2,2,1}),
      Vector::filter_copy( items,
        [](int i){ return 0 <= i && i <= 2; } ) );
    EXPECT_EQ( untouched_size, items.size() );
}

TEST( Vector, fold )
{
    std::vector<int> items{ 1, 2, 3, 4, 5, -4, -6, -2, -1 };
    // Vector::fold takes F( A&, T ) -> void
    auto accumulate_squares = []( std::set<int>& acc, int i ){ acc.insert(i*i); };
    std::set<int> expected{1, 4, 9, 16, 25, 36};
    EXPECT_EQ( expected, 
      Vector::fold( items, accumulate_squares, std::set<int>{} ) );
}

TEST( Vector, sort )
{
    // Vector::sort mutates the input, hence input is non-const reference
    std::vector<int> items{1,2,1,3};
    EXPECT_EQ( (std::vector<int>{1,1,2,3}), 
      Vector::sort( items ) );
}

TEST( Vector, sort_copy )
{
    // Vector::sort_copy copies the input collection,
    // so collection parameter is const& (and can be an unnamed temporary)
    EXPECT_EQ( (std::vector<int>{1,1,2,3}), 
      Vector::sort_copy( std::vector<int>{1,2,1,3} ) );
}

TEST( Vector, map )
{
    // Vector::map takes F(T) -> U
    // Input collection is const and a new collection is returned
    EXPECT_EQ( (std::vector<double>{ 1.1, 2.1, 3.1 }),
      Vector::map( std::vector<int>{ 1,2,3 },
        []( int i ){ return (double)i + 0.1; } ) );
}

TEST( Vector, transform )
{
    // Vector::transform takes F(T) -> T
    // Input collection is mutated
    std::vector<int> items{ 1, 2, 3 };
    EXPECT_EQ( (std::vector<int>{ 2, 4, 6 }),
      Vector::transform( items, []( int i ){ return 2*i; } ) );
}

int main(int argc, char* argv[]) 
{    
    InitGoogleMock( &argc, argv );
    return RUN_ALL_TESTS();   
}

Notice that we can bypass immutability in C++, so whereas in F# Map::filter would always create a copy, it could be preferable in C++ to filter in-place. With that in mind, I’ve written both filter and filter_copy variations. There’s a similar dilemma for map operations – if you want free rein over the output types, then use Map::map or Vector::map. But if you want to transform the data in place (sticking to the existing types), use Map::transform or Vector::transform.

That covers the most popular functions for just Map and Vector, but it would be straightforward to extend the library to cover List, Set and others. Similarly, I’d like to extend it to include higher-order functions like Choose, but I’ll need C++17’s std::optional for that.
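As a rough sketch of what a Vector::choose might look like once std::optional is available (hypothetical code, not part of the library above): F maps T to std::optional<U>, and only the engaged results are kept, combining filter and map in one pass.

// Sketch only: requires C++17 <optional> (plus <vector> and <utility>)
template <typename T, typename F>
auto choose( const std::vector<T>& items, F f )
{
    using U = typename decltype( f( std::declval<T>() ) )::value_type;

    std::vector<U> result;
    for ( const auto& item : items )
    {
        if ( auto maybe = f( item ) )
        {
            result.push_back( *maybe );
        }
    }
    return result;
}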

1 Comment

Filed under C++, C++ Code, Programming

How to calculate Fibonacci sequence for large numbers

Firstly, here’s a basic implementation of Fibonacci using recursion:

    using Numbers = std::vector<unsigned long long>;
    Numbers cache( 5000, 0 );
    
    unsigned long long fibonacciRecursiveImpl( size_t n, Numbers& cache )
    {
        if ( n == 0 ) return 0ULL;
        if ( n == 1 ) return 1ULL;
        
        if ( cache[n] != 0 ) return cache[n];
        
        auto num = fibonacciRecursiveImpl( n-1, cache ) + fibonacciRecursiveImpl( n-2, cache );
        cache[n] = num;
        return num;
    }
    
    unsigned long long fibonacciRecursive( size_t n )
    {
        return fibonacciRecursiveImpl( n, cache );
    }

This works fine for small numbers (e.g. up to 20). I was interested to know where it would fail first: data overflow (due to the size of the numbers involved) or stack overflow (due to the recursive approach)? I implemented this on a MacBook using Apple LLVM 7.0 and the C++14 flag, with no other special switches set. It turns out that the overflow problems kick in for n > 93, but there was no sign of a stack overflow, even up to n ~ 2000.
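If you want to see exactly where the wrap-around happens, a quick sketch that iterates until unsigned overflow is detected confirms the n = 93 limit:

    // Sketch: find the first Fibonacci index whose value overflows unsigned long long.
    // Unsigned overflow wraps around, so the new term suddenly becomes smaller.
    size_t firstOverflowIndex()
    {
        unsigned long long prev = 0, curr = 1;
        for ( size_t n = 2; ; ++n )
        {
            unsigned long long next = prev + curr;
            if ( next < curr ) return n; // 94 with a 64-bit unsigned long long
            prev = curr;
            curr = next;
        }
    }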

Even if there were a stack overflow, you could still use recursion by switching to a tail-recursive solution (many C++ compilers will eliminate tail calls, essentially turning this into an iterative solution):

    // Implement recursive approach with possibility for "Tail call elimination",
    // avoids any concerns about stack overflow
    unsigned long long fibonacciTailRecursiveImpl( size_t n, unsigned long long prevprev, unsigned long long prev )
    {
        if ( n == 0 ) return prevprev;
        if ( n == 1 ) return prev;
        
        return fibonacciTailRecursiveImpl( n-1, prev, prevprev + prev );
    }
    
    unsigned long long fibonacciTailRecursive( size_t n )
    {
        return fibonacciTailRecursiveImpl( n, 0, 1 );
    }

So how do you avoid the data overflow for Fibonacci beyond n = 93? At that point, you need a large-number type with its own internal representation of large integers and a suitable operator+ implementation. I used one such type, BigNum, for this HackerRank challenge. The implementation stores the large integer as a string, with each character holding a value in the range [0,100) – effectively base 100 – which roughly halves the number of digit-by-digit additions compared to a straight decimal representation.
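The BigNum class itself isn’t reproduced here, but a minimal sketch of the idea – base-100 digits held in a std::string, least significant first, with schoolbook addition – might look like this (the real class also needs printing via toString, comparisons and so on):

    // Sketch of a base-100 big integer; assumes <string> and <algorithm>
    class BigNum
    {
    public:
        explicit BigNum( unsigned long long n )
        {
            do { digits_.push_back( static_cast<char>( n % 100 ) ); n /= 100; } while ( n > 0 );
        }

        friend BigNum operator+( const BigNum& a, const BigNum& b )
        {
            BigNum result( 0 );
            result.digits_.clear();

            int carry = 0;
            for ( size_t i = 0; i < std::max( a.digits_.size(), b.digits_.size() ) || carry; ++i )
            {
                int sum = carry;
                if ( i < a.digits_.size() ) sum += a.digits_[i];
                if ( i < b.digits_.size() ) sum += b.digits_[i];
                result.digits_.push_back( static_cast<char>( sum % 100 ) );
                carry = sum / 100;
            }
            return result;
        }

    private:
        std::string digits_; // least-significant base-100 digit first
    };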

I replaced unsigned long long with BigNum in the recursive solution above and verified that it returns the correct answers for n ~ 2000, with no stack overflows. Here, I’ll show it in an iterative solution (if you don’t want to keep a cache around, this is highly memory efficient, because you only need to store the previous two values):

    BigNum fibonacciBigNumIterativeImpl( size_t n )
    {
        BigNum prevprev(0);
        BigNum prev(1);
        
        if ( n == 0 ) return prevprev;
        if ( n == 1 ) return prev;
        
        BigNum num(0);
        for ( size_t i = 2; i <= n; ++i )
        {
            num = prevprev + prev;
            prevprev = prev;
            prev = num;
        }
        return num;
    }
    
    std::string fibonacciBigNumIterative( size_t n )
    {
        auto result = fibonacciBigNumIterativeImpl( n );
        return result.toString();
    }

Leave a comment

Filed under C++, C++ Code, Programming

How to approximate Pi using C++11 random number generators

The other day I learnt a method to approximate Pi if you have a random number generator for the range [0,1]. Consider a unit circle centred on (0,0) in 2D coordinates. One quarter of the circle’s area lies in the quadrant where both x and y are in the range [0,1]. The area of that quarter circle is Pi * R^2 / 4, and here R = 1 (it’s a unit circle), while the enclosing unit square has area 1. So we can generate a bunch of random 2D points in the square, compute the fraction of them that falls inside the circle, then multiply by 4 to approximate Pi.
[diagram: quarter of a unit circle inscribed in the unit square]
That sounds like a neat test case for the C++11 random number generators, so I thought I’d try it out. It turns out to work pretty well, if you’re prepared to use a sufficiently large number of random values.

    double approxPi( size_t points )
    {
        std::random_device rand_device;
        
        // mersenne_twister_engine is a random number engine 
        // based on Mersenne Twister algorithm.
        std::mt19937 generator( rand_device() );
        
        // We want random values uniformly distributed in [0,1]
        std::uniform_real_distribution<> unif_zero_one(0, 1);
        
        size_t points_inside{0};
        
        for ( size_t i = 0; i < points; ++i )
        {
            auto x = unif_zero_one( generator );
            auto y = unif_zero_one( generator );
            double d = std::sqrt( x*x + y*y );
            
            if ( d <= 1.0 )
                ++points_inside;
        }
                
        return 4.0 * (static_cast<double>(points_inside) / static_cast<double>(points));
    }

void testApproximatePi()
{
    SHOULD_BE_APPROX( 3.14159, 0.3, approxPi( 100 ) );
    SHOULD_BE_APPROX( 3.14159, 0.1, approxPi( 1000 ) );
    SHOULD_BE_APPROX( 3.14159, 0.01, approxPi( 100000 ) );
    SHOULD_BE_APPROX( 3.14159, 0.001, approxPi( 10000000 ) );
    
    std::cout << "\n";
}

A typical run gives reasonable approximations once you get over 100,000 points:
[screenshot: output of a typical run]

Leave a comment

Filed under C++, C++ Code, Programming

Solving large puzzles on HackerRank

I’ve solved quite a few puzzles on HackerRank, but this one had me stumped. The actual algorithm didn’t seem too hard, although there is a bit of a trick to it. The problem I had was extending the solution afterwards to handle large numbers. Usually it’s enough to use ‘long long’ throughout, but the solution still wasn’t passing all the test cases.

In the end, I narrowed down the problem to the following code:

  long long maximiseScore( int N )
  {
    std::vector<long long> health( N, 0 );
    for ( size_t i = 0; i < N; ++i ) std::cin >> health[i];
    long long sum = std::accumulate( health.begin(), health.end(), 0 );
    // ...
  }

In case you didn’t spot it, the bug is that std::accumulate infers the type of its init parameter from the literal 0 (zero), which is an int. So the sum is calculated as an int (overflowing for large inputs), then assigned to our long long variable. The fix is to make the initial value a long long, either with the LL suffix or a static_cast:

    long long sum = std::accumulate( health.begin(), health.end(), 0LL );

Leave a comment

Filed under C++, C++ Code, Programming

How to initialise data concisely in C++

In the past, creating a collection of dates was a chore, because you had to individually insert them into a container (e.g. declaring a vector on one line and then pushing dates into it on subsequent lines). With features like std::initializer_list from C++11 onwards, it’s now much easier to do this concisely.
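For comparison, the pre-C++11 way looked something like this (a sketch, using the Date struct defined just below):

std::vector<Date> dates;

Date christmas = { "Christmas", 2015, 12, 25 };
dates.push_back( christmas );

Date springBankHoliday = { "Spring Bank Holiday", 2016, 6, 30 };
dates.push_back( springBankHoliday );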

Here’s some simple, concise code to create dates without all the hassle:

struct Date
{
    std::string Event;

    int Year;
    int Month;
    int Day;
};

void print( const Date& date )
{
    std::cout << date.Event << ": "<< date.Year << "/" << date.Month << "/" << date.Day << "\n";
}

void print( const std::vector<Date>& dates )
{
    for ( const auto& date : dates )
    {
        print( date );
    }
}

void test()
{
    std::cout << "Print date:\n";
    print( { "Today", 2015, 5, 5 } );

    std::cout << "Print dates:\n";
    print( {
        { "Christmas", 2015, 12, 25 },
        { "Spring Bank Holiday", 2016, 6, 30 }
           } );
}

This style is particularly useful when writing tests – you can write a whole test, including setting up the data, on a single line (or at least, in a single function call).

Another compelling use case comes when creating test cases for graph algorithms. Suppose you have the following data structures for an undirected, weighted graph:

struct Edge
{
    const size_t end1;
    const size_t end2;
    const size_t cost;
};

struct Graph
{
    size_t source;
    size_t nodes;
    std::vector<Edge> edges;
};

Then creating a test graph to pass into an algorithm is as simple as:

shortest_path( { 0, 4, { {0,1,24}, {0,3,20}, {2,0,3}, {3,2,12} } })

Leave a comment

Filed under C++, C++ Code, Programming

How to efficiently find the n’th biggest element in a collection

Looking at std::nth_element the other day, I noticed that its complexity is O(n) for a collection of size n, and wondered how that was achieved. A basic implementation of an algorithm to find the ith-biggest element might start by sorting the collection and indexing to the ith element afterwards – complexity O(n*log(n)):

int ith_element_by_sorting( std::vector<int> input, int i )
{
    std::sort( std::begin(input), std::end(input) );
    return input[i];
}

It’s a small step then to realise that you don’t need to sort all n elements of the collection – only the first i+1 – giving complexity O(n*log(i)):

int ith_element_by_partial_sort( std::vector<int> input, int i )
{
    std::partial_sort( std::begin( input ), std::begin( input ) + i + 1, std::end( input ) );
    return input[i];
}

But the real trick is that you don’t need to do any sorting at all. That’s the approach taken by quickselect, which is the selection sibling to quicksort, and achieves O(n) complexity on average:

int partition( std::vector<int>& values, int left, int right, int pivot )
{
     auto pivotValue = values[ pivot ];

     std::swap( values[pivot], values[right] ); // Move pivot to end
     auto store_pos = left;

     for ( int j = left; j < right; ++j )
     {
         if ( values[j] < pivotValue )
         {
             std::swap( values[ store_pos ], values[j] );
             ++store_pos;
         }
     }

     std::swap( values[right], values[ store_pos ] );  // Move pivot to its final place

     return store_pos;
}

int quickselect( std::vector<int> values, int left, int right, int i )
{
    // Unordered - no need to sort values at all, 
    // instead we recursively partition only the subset of values
    // containing the i'th element, until we have either
    // a) trivial subset of 1
    // b) pivot is moved to exactly the location we wanted 
    while( 1 )
    {
         if ( left == right )
         {
             return values[left];
         }

         // Pick a pivot from middle of values.
         // Better options are a random pivot or median of 3
         auto pivot = (left + right)/2; 

         // Move anything smaller than values[pivot] to the left of pivot,
         // and return updated position of pivot
         pivot = partition( values, left, right, pivot );

         if ( pivot == i )
         {
             return values[i];
         }
         else if ( i < pivot )
         {
             right = pivot - 1;
         }
         else
         {
             left = pivot + 1;
         }
    }
}

int ith_element_by_quickselect( const std::vector<int>& input, int i )
{
    return quickselect( input, 0, input.size()-1, i );
}

int ith_element( const std::vector<int>& input, int i )
{
    if ( i < 0 || i >= input.size() )
    {
        std::ostringstream ss;
        ss << "Input '" << i << "' outside range [0," << input.size() << ")";
        throw std::out_of_range( ss.str() );
    }

    return ith_element_by_quickselect( input, i );
}

Here’s some test code to check that the implementation works:

template <typename F, typename T>
void should_be( T t, F f, const std::string& message )
{
    try
    {
        std::ostringstream ss;

        auto got = f();
        if ( got != t )
        {
            ss << message << " got " << got << ", expected " << t;
            throw std::runtime_error( ss.str() );
        }
        else
        {
            ss << "OK: " << message << ": got " << got;
            std::cout << ss.str() << "\n";
        }
    }
    catch( const std::exception& ex )
    {
        // Report error if either f() threw or we found unexpected value
        std::cout << "ERROR: " << ex.what() << "\n";
    }
}

template <typename F>
void should_throw( F f, const std::string& message )
{
    try
    {
        f();
    }
    catch( const std::exception& ex )
    {
        std::cout << "OK: " << message << ": threw \"" << ex.what() << "\"\n";
        return;
    }

    std::cout << "ERROR: " << message << " should have thrown\n";
}

#define SHOULD_BE( t, expr ) should_be( t, [](){ return expr; }, #expr )
#define SHOULD_THROW( expr ) should_throw( [](){ expr; }, #expr )

void testIthElement()
{
    SHOULD_THROW( ith_element( {}, 0 ) );
    SHOULD_THROW( ith_element( {1,2}, -1 ) );
    SHOULD_THROW( ith_element( {1,2,3}, 3 ) );

    SHOULD_BE( 1, ith_element( {1}, 0 ) );
    SHOULD_BE( 0, ith_element( {0,1,2,3,4,5,6,7,8}, 0 ) );
    SHOULD_BE( 2, ith_element( {0,1,2,3,4,5,6,7,8}, 2 ) );
    SHOULD_BE( 6, ith_element( {5,4,7,6,1,2,0,8,3}, 6 ) );
    SHOULD_BE( 8, ith_element( {5,4,7,6,1,2,0,8,3}, 8 ) );
    SHOULD_BE( 5, ith_element( {5,5,5,5,5,5}, 1 ) );
}

Here’s the output:
[screenshot: test output]

In fact, reviewing old posts on this blog, I found this link that dates back to 2013, when the standard only required std::nth_element to be O(N) on average, leaving worst-case O(N^2) possible – precisely what you’d get from quickselect. Now, though, implementations can use introselect to achieve O(N) even in the worst case.
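Of course, unless you’re re-implementing it for fun, you can just call the standard algorithm directly – for example:

#include <algorithm>
#include <vector>

int ith_element_by_nth_element( std::vector<int> input, int i )
{
    std::nth_element( input.begin(), input.begin() + i, input.end() );
    return input[i];
}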

Leave a comment

Filed under C++, C++ Code, Programming

Tech Book: Effective Modern C++, Scott Meyers

When Scott Meyers announced his retirement from C++ duties, I thought I’d get a copy of his latest book. There’s plenty in it to interest even seasoned C++ developers – as always, Scott’s insight and in-depth examples are worth studying. I’ve tried out most of the C++11 features, but still learnt a lot working through this book.

Uniform Initialisation and Auto
Meyers points out that, compared to parentheses or plain “=”, braced initialisation can be used in the widest range of contexts, avoids his famous “C++’s most vexing parse”, and makes the compiler issue errors if narrowing occurs. However, for types that have a std::initializer_list constructor, the compiler is required to favour that constructor over any other interpretation. That’s a deterrent to using braced initialisation with such types – and it also makes auto with braced initialisation a trap:

auto x = { 4 }; // type of x is inferred to be std::initializer_list<int>!
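A classic illustration is std::vector, where the choice of brackets changes which constructor gets called:

std::vector<int> v1( 10, 20 ); // parentheses: ten elements, each with value 20
std::vector<int> v2{ 10, 20 }; // braces: the std::initializer_list constructor wins - two elements, 10 and 20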

How to use the new “= delete” idiom
Meyers recommends declaring deleted methods as public to get better error messages from compilers that check for accessibility before deleted status. He also points out that you can declare non-member functions as deleted (I’d only seen examples of copy-constructor/copy-assignment operator as deleted before).
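A quick sketch of both points (hypothetical Widget and isLucky, in the spirit of the book’s examples):

class Widget
{
public:
    Widget() = default;
    Widget( const Widget& ) = delete;            // public and deleted: clearer diagnostics
    Widget& operator=( const Widget& ) = delete; // than the old private-and-undefined trick
};

bool isLucky( int number );      // only ints are meaningful here
bool isLucky( double ) = delete; // deleted non-member overload rejects double (and float) callers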

Use of noexcept on move operations
This one is a classic example of a combination of language features having an unexpected effect. std::vector::push_back offers the strong exception guarantee. With copy constructors that is straightforward: the original contents of the vector are left untouched until the new elements (or copies, during reallocation) have been constructed. Moves, on the other hand, modify the source elements, so the vector will only prefer the move constructor if moving cannot throw – which means it must be marked noexcept (and if that isn’t appropriate, you just won’t get move semantics when pushing instances of your type into a std::vector).
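For instance, here’s a sketch with a hypothetical Buffer type – only because the move operations are noexcept will std::vector move Buffers when it reallocates, rather than falling back to copies:

#include <cstddef>
#include <utility>
#include <vector>

class Buffer
{
public:
    explicit Buffer( std::size_t n ) : data_( n ) {}

    Buffer( Buffer&& other ) noexcept // noexcept: push_back can safely move on reallocation
        : data_( std::move( other.data_ ) )
    {}
    Buffer& operator=( Buffer&& other ) noexcept
    {
        data_ = std::move( other.data_ );
        return *this;
    }

    Buffer( const Buffer& ) = default;            // copying remains the fall-back
    Buffer& operator=( const Buffer& ) = default;

private:
    std::vector<char> data_;
};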

Recursion with constexpr
Having toyed with template meta-programming in the past, this use of constexpr appeals to me:

constexpr int pow( int b, int exp ) noexcept
{
    return (exp == 0 ? 1 : b * pow( b, exp-1 ));
}
constexpr auto eight = pow( 2, 3 );

void testConstExpr()
{
    std::cout << "2^3 = " << eight << "\n";
}

It’s much more succinct than the equivalent TMP version, and still performs the calculation at compile time.
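For comparison, a sketch of the equivalent template meta-programming version:

template <int Base, int Exp>
struct Pow
{
    static const int value = Base * Pow<Base, Exp - 1>::value;
};

template <int Base>
struct Pow<Base, 0>
{
    static const int value = 1;
};

static_assert( Pow<2, 3>::value == 8, "2^3 should be 8" );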

Summary
In the past, I’ve recommended Effective C++ for developers who have little experience of the language and wish to improve their skills. I think this book is a bit too advanced for that, particularly given the chapters on template type deduction and decltype being in the first part of the book! So read this book if you’ve got a few years’ experience of C++ already, but look at Effective C++ (3rd Edition) to improve on the basics.

Four stars

Leave a comment

Filed under C++, C++ Code, Programming, Tech Book