Wednesday, November 26, 2014

The Python API and C++

Recently, for a job interview task, I was asked to write a Python module with a C or C++ implementation to solve an otherwise simple task. Obviously, I chose C++. While I had never used the Python API before, I found the existing information on extending Python with C quite sufficient. What surprised me, however, is how little information existed for using C++. A few libraries exist, like Boost.Python, PyCXX, and some utilities that parse C++ to create Python bindings, but I didn't find much in the way of actual information without examining the sources of these libraries.

I will not discuss at length why someone would want to implement a Python module in another language (efficiency? better library support for certain tasks? preference?), but why C++? The Python API has basically no type safety--everything is a PyObject *, whether it represents a string, number, or tuple. It requires a considerable amount of boilerplate--something we can reduce by using the C++ type system. It presents some interesting technical challenges, which are what I will focus on. I will assume some knowledge of the Python API.

Note: I will be basing this off Python 2.7. Yes, Python 3 is newer, but due to incompatibilities, not a replacement, and also not my system default. Also, I have little experience with the Python API, so do not take this article as authoritative. It represents a journal of my experiments.

I have started working on a little utility library (https://github.com/splinterofchaos/py-cxx) for personal use, but for a listing of the code for this article, see the gist: https://gist.github.com/splinterofchaos/b099149a701edfa5948f


Writing a Python Module: The Basics

First, we will want to create a Python module, which alone is rather uninteresting. For a more in-depth study, one should refer to https://docs.python.org/2/extending/extending.html.

Every module requires an init function which communicates to the interpreter what functions, types, and objects this module offers. For now, let's consider a module that counts how many times a certain function gets called.
#include <Python.h>

PyObject *count(PyObject *self, PyObject *args)
{
  static int i = 0;
  PySys_WriteStdout("%i\n", ++i);  // Just like printf.
  return PyInt_FromLong(i);
}

static PyMethodDef countMethods[] = {
  {"count", count, METH_VARARGS, "Returns the number of times called."},
  {NULL, NULL, 0, NULL}
};

PyMODINIT_FUNC initcount()
{
  PyObject *m = Py_InitModule("count", countMethods);
}
See setup.py for building this example.

Here, countMethods contains the defined functions in a {name, c-function, function-type, __doc__-string} structure. count must be a PyCFunction, a function taking self (probably null) and args (argument tuple) parameters and returning an object. METH_VARARGS lets the interpreter know this is a regular function--other types of functions do exist, but more on that later.

The PyMODINIT_FUNC macro tells Python that, obviously, this function initializes the module. Note that even Py_InitModule() returns a regular Python object!

There are several improvements we can make. First, we could write an overloaded function, to_py_int(), that could dispatch between PyInt_FromLong(), PyInt_FromSsize_t(), and friends, but that's rather mundane so I'll be skipping it. More interesting: we can write a function to define methods.
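For the curious, a minimal sketch of such an overload set might look like the following (my own choice of conversion functions; the exact set depends on which Python integer typedefs alias which C types on a given platform):
PyObject *to_py_int(long x)          { return PyInt_FromLong(x); }
PyObject *to_py_int(unsigned long x) { return PyLong_FromUnsignedLong(x); }
PyObject *to_py_int(long long x)     { return PyLong_FromLongLong(x); }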

Aside from METH_VARARGS, a function can be METH_KEYWORDS, in which case it takes an additional dictionary parameter and thus is not a PyCFunction; METH_NOARGS, which must still be a PyCFunction and may receive a self argument, but always gets NULL for args; or METH_O, which receives its single argument in place of the args tuple. It may be convenient to write a function that takes a pointer to a specific type instead of the generic PyObject, but by casting we lose certain safety guarantees, and it becomes easy to do something stupid, like writing a function with the wrong number of arguments or registering it with the wrong METH_* variant.
#include <type_traits>
template<typename R, typename...X>
constexpr int arity(R(*)(X...)) {
  return sizeof...(X);
}

template<typename R, typename...X>
constexpr bool returns_PyObject(R(*)(X...)) {
  // Result is either a PyObject, or a subclass of one.
  return std::is_convertible<R, PyObject *>::value;
}

template<typename R, typename...X>
constexpr bool is_PyCFunction(R(*)(X...)) {
  return false;
}

template<>
constexpr bool is_PyCFunction(PyCFunction) {
  return true;
}

template<typename F>
constexpr int method_type(F f) {
  return arity(f) == 3     ? METH_KEYWORDS
       : is_PyCFunction(f) ? METH_VARARGS
                           : METH_O;
}

template<typename F>
constexpr PyMethodDef method_def(const char *name, const char *doc,
                                 int type, F f)
{
  static_assert(arity(F()) == 2 || arity(F()) == 3,
                "Methods must have an arity of 2 or 3");
  static_assert(returns_PyObject(F()), "Methods must return a PyObject *.");
  return {name, (PyCFunction)f, type, doc};
}

template<typename F>
constexpr PyMethodDef method_def(const char *name, const char *doc, F f)
{
  return method_def(name, doc, method_type(f), f);
}

static PyMethodDef countMethods[] = {
  method_def("count", "Returns the number of times called.", count),
  {NULL, NULL, 0, NULL}
};
Note that in order to use static_asserts, we construct an F instead of passing f because f, as a parameter, may not be a constexpr.

Now, we can declare methods in a type-safe manner without having to specify METH_* or lose any safety. While it may be a little limiting (for example, we can't use a lambda to define the method), one can always fall back to not using method_def.

Note: It may be safe to define a function that takes no arguments and cast it to a PyCFunction; however, I don't know that this holds across all architectures and ABI calling conventions.

One thing lacking from this example is actually using the args parameter. For that, we will need to use PyArg_ParseTuple().


A Type-Safe PyArg_ParseTuple().

Let's use the example of finding the cross product of two vectors.
#include <Python.h>

#include "Py.h"  // includes MethodDef()

PyObject *cross(PyObject *self, PyObject *args)
{
  float a, b, c;
  float x, y, z;

  if (!PyArg_ParseTuple(args, "(fff)(fff)", &a, &b, &c, &x, &y, &z))
    return nullptr;

  float i = b*z - c*y;
  float j = c*x - a*z;
  float k = a*y - b*x;

  return Py_BuildValue("fff", i, j, k);
}

static PyMethodDef vecMethods[] = {
  MethodDef("cross", "Returns the cross product of two 3D vectors.", cross),
  {NULL, NULL, 0, NULL}
};

PyMODINIT_FUNC initvec()
{
  PyObject *m = Py_InitModule("vec", vecMethods);
}
This lets us write, in Python, cross((a,b,c), (x,y,z)). Even simple functions like this benefit from being written in statically typed languages since, in Python, when one wants to do many operations on some variables, their types must be checked every time, lest you try to add a string to an integer. Here, we do nine operations, but only check the types of the initial six arguments.

PyArg_ParseTuple() is really quite simple; you pass in args and a format string (in this case, using f for float), and pointers to the variables you want to fill. If the tuple doesn't fit the expected format, it sets an error so we can just return NULL. We do our calculation and call Py_BuildValue(), which creates a tuple when given more than one value. Unfortunately, it's very verbose and not type-safe. We can fix that, but first, we must build a format string, preferably at compile time, to pass in.

First, we can use, for convenience, a typedef of std::integer_sequence to build a list of chars.
template<char...cs>
using CharList = std::integer_sequence<char, cs...>;
Then, define mappings for PyArg_ParseTuple.
template<typename...T>
struct CharListConcat;

template<typename T>
struct CharListConcat<T> {
  using type = T;
};

template<typename...U, char...cs, char...cs2>
struct CharListConcat<CharList<cs...>, CharList<cs2...>, U...> {
  using type = typename CharListConcat<CharList<cs..., cs2...>, U...>::type;
};

template<typename...T>
using CharListConcat_t = typename CharListConcat<T...>::type;

// The primary template, specialized below for each supported type.
template<typename T>
struct PTCharListOf;

template<> struct PTCharListOf<float> {
  using type = CharList<'f'>;
};

template<typename...Ts>
struct PTCharListOf<std::tuple<Ts...>> {
  using type = CharListConcat_t<CharList<'('>,
                                typename PTCharListOf<std::decay_t<Ts>>::type...,
                                CharList<')'>>;
};

template<typename T>
using PTCharListOf_t = typename PTCharListOf<T>::type;
Unfortunately, this strategy is a bit limited--we couldn't pass in an std::vector to get the desired effect because we wouldn't know how many elements go into it. A better option would be to add a PyObject * specialization for PTCharListOf and manually check that the result is a list.

template<> struct PTCharListOf<PyObject *> {
  using type = CharList<'O'>;
};
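For example, a function receiving a list through the 'O' format could verify and walk it by hand, something like this hypothetical sum_list() (not part of this module; it uses plain PyArg_ParseTuple() since our ParseTuple() isn't defined yet):
PyObject *sum_list(PyObject *self, PyObject *args)
{
  PyObject *lst;
  if (!PyArg_ParseTuple(args, "O", &lst))
    return nullptr;

  // 'O' accepts any object, so check that we really got a list.
  if (!PyList_Check(lst)) {
    PyErr_SetString(PyExc_TypeError, "expected a list");
    return nullptr;
  }

  long total = 0;
  for (Py_ssize_t i = 0; i < PyList_Size(lst); ++i)
    total += PyInt_AsLong(PyList_GetItem(lst, i));  // Borrowed reference.
  return PyInt_FromLong(total);
}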
Next, we define a type to build the format:

template<typename...Ts>
struct ParseTupleBuilder { };

template<typename CL, typename T, typename...Ts>
struct ParseTupleBuilder<CL, T, Ts...> {
  using type = ParseTupleBuilder<CharListConcat_t<CL, PTCharListOf_t<T>>,
                                 Ts...>;
  constexpr static const char *fmt = type::fmt;
};

template<char...cs>
struct ParseTupleBuilder<CharList<cs...>> {
  using type = CharList<cs...>;

  static const char fmt[sizeof...(cs) + 1];
};

template<char...cs>
const char ParseTupleBuilder<CharList<cs...>>::fmt[] = { cs..., '\0' };

template<typename...Ts>
constexpr const char *ParseTupleFormat(Ts...) {
  return ParseTupleBuilder<CharList<>, std::decay_t<Ts>...>::fmt;
}
One interesting thing: When I defined fmt inside ParseTupleBuilder, I got an error from inside Python on typing "import vec" claiming that fmt's constructor had not been defined. The Python docs warn that static global variables with constructors may not be used if Python was built with a C compiler, but defining fmt outside the struct seems to fix this.
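As a quick sanity check (assuming the code compiles as listed), the cross product's two 3-float tuples should produce the expected format string:
float a, b, c, x, y, z;
const char *fmt = ParseTupleFormat(std::tie(a, b, c), std::tie(x, y, z));
// fmt points at the statically built string "(fff)(fff)".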

Finally, we can start defining ParseTuple(). The strategy I chose was to build an std::tuple of arguments to send to PyArg_ParseTuple() and examine each argument in a helper function. This will require two helpers, defined below, apply_tuple() and map_tuple().
template<typename F, typename T, size_t...Is>
decltype(auto) apply_tuple(F&& f, T&& t, std::index_sequence<Is...>) {
  return std::forward<F>(f)(std::get<Is>(std::forward<T>(t))...);
}

template<typename F, typename T, size_t...Is>
decltype(auto) map_tuple(F&& f, T&& t, std::index_sequence<Is...>) {
  return std::make_tuple(std::forward<F>(f)(std::get<Is>(std::forward<T>(t)))...);
}

template<typename F, typename...Ts,
         typename Is = std::make_index_sequence<sizeof...(Ts)>>
decltype(auto) map_tuple(F&& f, std::tuple<Ts...> &t) {
  return map_tuple(std::forward<F>(f), t, Is());
}

template<typename...Bound,
         typename Indicies = std::make_index_sequence<sizeof...(Bound)>>
bool ParseTuple_impl(std::tuple<Bound...> &&bound) {
  return apply_tuple(PyArg_ParseTuple, bound, Indicies());
}

template<typename...Bound, typename Arg, typename...Args>
bool ParseTuple_impl(std::tuple<Bound...> &&bound, Arg &a, Args &...as) {
  return ParseTuple_impl(std::tuple_cat(std::move(bound), std::make_tuple(&a)),
                          as...);
}

template<typename...Bound, typename...Args>
bool ParseTuple_impl(std::tuple<Bound...> &&bound, Optional, Args &...as) {
  return ParseTuple_impl(std::move(bound), as...);
}

template<typename...Bound, typename...Ts, typename...Args>
bool ParseTuple_impl(std::tuple<Bound...> &&bound, std::tuple<Ts &...> &t,
                     Args &...as) {
  auto &&mapped = map_tuple([](auto &x) { return &x; }, t);
  return ParseTuple_impl(std::tuple_cat(bound, std::move(mapped)),
                         as...);
}

template<typename...Args>
bool ParseTuple(PyObject *args, Args &&...as) {
  return ParseTuple_impl(std::make_tuple(args, ParseTupleFormat(as...)),
                          as...);
}
Before getting back to our cross product function, we will also want a BuildValue() function. Please excuse the repetitive nature of this code.
template<typename...Bound,
         typename Indicies = std::make_index_sequence<sizeof...(Bound)>>
PyObject *BuildValue_impl(std::tuple<Bound...> &&bound) {
  return apply_tuple(Py_BuildValue, bound, Indicies());
}

template<typename...Bound, typename Arg, typename...Args>
PyObject *BuildValue_impl(std::tuple<Bound...> &&bound, Arg a, Args ...as) {
  return BuildValue_impl(std::tuple_cat(std::move(bound), std::make_tuple(a)),
                         as...);
}

template<typename...Bound, typename...Args>
PyObject *BuildValue_impl(std::tuple<Bound...> &&bound, Optional, Args &...as) {
  return BuildValue_impl(std::move(bound), as...);
}

template<typename...Bound, typename...Ts, typename...Args>
PyObject *BuildValue_impl(std::tuple<Bound...> &&bound, std::tuple<Ts...> &t,
                          Args &...as) {
  return BuildValue_impl(std::tuple_cat(bound, std::move(t)), as...);
}

template<typename...Args>
PyObject *BuildValue(Args &...as) {
  return BuildValue_impl(std::make_tuple(ParseTupleFormat(as...)),
                          as...);
}
And finally, getting back to our cross product...
PyObject *cross(PyObject *self, PyObject *args)
{
  float a, b, c;
  float x, y, z;

  if (!ParseTuple(args, std::tie(a,b,c), std::tie(x,y,z)))
    return nullptr;

  float i = b*z - c*y;
  float j = c*x - a*z;
  float k = a*y - b*x;

  return BuildValue(i, j, k);
}
That sure was a lot of work, but it created a simple interface that's hard to use improperly.

Extending Python Types

Probably the cornerstone of extending Python itself would be to define new types that interact well with the existing Python infrastructure. For efficiency's sake, the more variables we can statically type and hold in a struct, the better. The Python docs suggest extending a type this way:
typedef struct {
    PyObject_HEAD
    ...
} MyType;
The macro PyObject_HEAD contains fields common to every Python object, ensuring that casting a MyType pointer to a PyObject pointer is valid. This is a common technique for representing inheritance in C; in C++, however, we can get the same effect by using actual inheritance.
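A quick sketch of the idea (the Extention template below generalizes it):
struct MyType : PyObject {
  // Statically typed fields go here.
};
// A MyType * converts to a PyObject * implicitly; no macro or cast needed.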

Also, every Python type requires an accompanying PyTypeObject, which is itself a PyObject. The PyTypeObject stores lots of runtime information about a type, including which function allocates it, which converts it to a string, its methods, its base class, how to deallocate it, and more.

We can use a constructor for our extension type, but it may be wisest not to. One of the fields of the type object, tp_alloc, defines how to allocate memory for this type: it sets the reference count to one, fills in the ob_type field (a member of PyObject), and does a few other things. It must also work for classes that inherit from our custom type. It relates closely enough to Python's internals that I think it is best left alone, and it can be left as NULL in the PyTypeObject without trouble.

More interesting would be tp_new, which must point to a function that calls tp_alloc and initializes the memory, and must be defined in order to create instances of our new type. We can define tp_new to use a placement new for objects in our type that require construction.

We can generalize that an extension of PyObject will look like this:
template<typename T>
struct Extention : PyObject
{
  static PyTypeObject type;

  T ext;

  T       &get()       & { return this->ext; }
  const T &get() const & { return this->ext; }

  T       *ptr()       & { return &this->ext; }
  const T *ptr() const & { return &this->ext; }
};
We can define a default tp_new and tp_dealloc and initialize type like so:
template<typename T,
         typename = std::enable_if_t<std::is_default_constructible<T>::value>>
newfunc default_new()
{
  return [](PyTypeObject *type, PyObject *args, PyObject *kwds)
  {
    using Self = Extention<T>;
    Self *self = (Self *) type->tp_alloc(type, 0);
    if (self)
      new (self->ptr()) T();
    return (PyObject *) self;
  };
}

template<typename T,
         typename = std::enable_if_t<!std::is_default_constructible<T>::value>>
auto default_new() {
  return [](PyTypeObject *type, PyObject *args, PyObject *kwds)
  {
    return type->tp_alloc(type, 0);
  };
}

template<typename T>
PyTypeObject Extention<T>::type = {
  PyObject_HEAD_INIT(NULL)
  0,                         // ob_size
  0,                         // tp_name
  sizeof(Extention<T>),      // tp_basicsize
  0,                         // tp_itemsize
  destructor([](PyObject *self) {
    ((Extention *) self)->get().T::~T();
    self->ob_type->tp_free(self);
  }),
  0,                         // tp_print
  0,                         // tp_getattr
  0,                         // tp_setattr
  0,                         // tp_compare
  0,                         // tp_repr
  0,                         // tp_as_number
  0,                         // tp_as_sequence
  0,                         // tp_as_mapping
  0,                         // tp_hash 
  0,                         // tp_call
  0,                         // tp_str
  0,                         // tp_getattro
  0,                         // tp_setattro
  0,                         // tp_as_buffer
  Py_TPFLAGS_DEFAULT,        // tp_flags
  0,                         // tp_doc 
  0,                         // tp_traverse 
  0,                         // tp_clear 
  0,                         // tp_richcompare 
  0,                         // tp_weaklistoffset 
  0,                         // tp_iter 
  0,                         // tp_iternext 
  0,                         // tp_methods 
  0,                         // tp_members 
  0,                         // tp_getset 
  0,                         // tp_base 
  0,                         // tp_dict 
  0,                         // tp_descr_get 
  0,                         // tp_descr_set 
  0,                         // tp_dictoffset 
  0,                         // tp_init 
  0,                         // tp_alloc 
  default_new<T>(),          // tp_new
};
PyTypeObject does have a few more fields, but the compiler sets them to 0 for us. We do, however, have to set tp_basicsize in order for the right amount of memory to be allocated. Since a type in C++ may not be default-constructible, default_new() may return a function that does not construct the object; this must be done in tp_init.

Now, returning to the cross product example, consider this:
struct Vec {
  float x, y, z;
};

using PyVec = Extention<Vec>;

int init_vec(PyVec *self, PyObject *args, PyObject *)
{
  Vec &v = self->get();
  if (!ParseTuple(args, v.x, v.y, v.z))
    return -1;
  return 0;
}

PyObject *vec_str(PyVec *self)
{
  return PyString_FromString(("<"  + std::to_string(self->get().x) +
                              ", " + std::to_string(self->get().y) +
                              ", " + std::to_string(self->get().z) +
                              ">").c_str());
}

PyMODINIT_FUNC initvec()
{
  PyVec::type.tp_name = "vec.Vec";
  PyVec::type.tp_init = (initproc) init_vec;
  PyVec::type.tp_repr = PyVec::type.tp_str = (reprfunc) vec_str;
  if (PyType_Ready(&PyVec::type) < 0)
    return;

  PyObject *m = Py_InitModule("vec", vecMethods);
  if (!m)
    return;

  Py_INCREF(&PyVec::type);
  PyModule_AddObject(m, "Vec", (PyObject *) &PyVec::type);
}
Note that tp_repr is used to display the result of evaluating an expression, and tp_str is used for printing. tp_init is used to construct our value and relates to Vec.__init__() in Python. PyType_Ready() finalizes the type and fills in some of the missing tp_* fields. We add the type to the module as a global object and increment its reference count so Python doesn't try to destruct it. For brevity, I decided not to include functions to check the type safety of the initproc and reprfunc casts.
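For the curious, one way such a check might look--this is my own sketch, not code from the gist--is a small wrapper that static_asserts on the signature before casting:
template<typename R, typename A>
reprfunc as_reprfunc(R (*f)(A *))
{
  static_assert(std::is_convertible<R, PyObject *>::value,
                "a reprfunc must return a PyObject *");
  static_assert(std::is_convertible<A *, PyObject *>::value,
                "a reprfunc's self must convert to a PyObject *");
  return (reprfunc) f;
}
Usage would be PyVec::type.tp_repr = PyVec::type.tp_str = as_reprfunc(vec_str);, and an analogous wrapper could be written for initproc.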

Since Vec is default constructible, we only need to worry about assigning the members in the init function.

And now, cross looks like this:
PyObject *cross(PyObject *self, PyObject *args)
{
  PyObject *o1, *o2;
  if (!ParseTuple(args, o1, o2))
    return nullptr;

  // Ensure o1 and o2 are the right types.
  if (!PyType_IsSubtype(o1->ob_type, &PyVec::type) ||
      !PyType_IsSubtype(o2->ob_type, &PyVec::type))
    return nullptr;
  
  Vec &v = ((PyVec *) o1)->get(), &w = ((PyVec *) o2)->get();
  float i = v.y*w.z - v.z*w.y;
  float j = v.z*w.x - v.x*w.z;
  float k = v.x*w.y - v.y*w.x;

  PyObject *ret = PyVec::type.tp_new(&PyVec::type, nullptr, nullptr);

  PyObject *val = BuildValue(i, j, k);
  init_vec((PyVec *) ret, val, nullptr);
  Py_DECREF(val);

  return ret;
}


Conclusions

Despite this being quite a long article, it has only touched the surface of how the Python API can be extended. There are many restrictions and it certainly puts a cramp on C++'s style, but the moral of this story is that just because you need to work with a C API doesn't mean you can't use modern C++ techniques.

Tuesday, February 4, 2014

Clang 3.4 and C++14

With each new release, gcc and clang add more C++11 and C++14 features. While clang has been behind on some features and ahead on others, its developers now claim to have C++1y all worked out.

This article is not comprehensive.
Clang's 3.4 C++ release notes:  http://llvm.org/releases/3.4/tools/clang/docs/ReleaseNotes.html#id1
libc++'s C++1y status: http://libcxx.llvm.org/cxx1y_status.html

Note: Compiling these examples requires the flags "-stdlib=libc++" and "-std=c++1y".


Variable templates.


This feature, from N3651, took me most by surprise, but it also seems quite obvious. In the simplest example, let def<T> be a variable that represents the default-constructed value of any type, T.

template<typename T>
constexpr T def = T();
 
auto x = def<int>; // x = int()
auto y = def<char>; // y = char() 

The proposal uses the example of pi, where it may be more useful to store it as a float or double, or even long double. By defining it as a template, one can have precision when needed and faster, but less precise, operations otherwise.
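Roughly, the proposal's pi looks like this (my transcription, not a verbatim quote of N3651; the area() example further down assumes a definition along these lines):

template<typename T>
constexpr T pi = T(3.1415926535897932385L);

long double precise = pi<long double>; // Full precision.
float fast = pi<float>;                // Cheaper, less precise.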

For another example, consider storing a few prime numbers, but not specifying the type of their container.

template<template<typename...> class Seq>
Seq<int> primes = { 2, 3, 5, 7, 11, 13, 17, 19 };

auto vec = primes<std::vector>;
auto list = primes<std::list>;
(gist)

Also, the standard library contains many template meta-functions, some with a static value member. Variable templates help there, too.

template<typename T, typename U>
constexpr bool is_same = std::is_same<T,U>::value;

bool t = is_same<int,int>;   // true
bool f = is_same<int,float>; // false
(std::is_same)

But since variable templates can be specialized just like template functions, it makes as much sense to define it this way:

template<typename T, typename U>
constexpr bool is_same = false;

template<typename T>
constexpr bool is_same<T,T> = true;
(gist)

Except for when one requires that is_same refers to an integral_constant.

One thing worries me about this feature: how do we tell the difference between template meta-functions, template functions, template function objects, and variable templates? What naming conventions will be invented? Consider the above definition of is_same and the following:

// A template lambda that looks like a template function.
template<typename T>
auto f = [](T t){ ... };

// A template meta-function that might be better as a variable template.
template<typename T>
struct Func { static constexpr bool value = ...; };

Each has a subtly different syntax. For example, N3545 adds an operator() overload to std::integral_constant, which enables syntax like bool b = std::is_same<T,U>(), while N3655 adds std::is_same_t<T,U> as a synonym for std::is_same<T,U>::value. (Note: libc++ is missing std::is_same_t.) Even without variable templates, we now have three ways to refer to the same thing.

Finally, one problem I did have with it: I wrote a function like so:

template<typename T>
auto area( T r ) {
    return pi<T> * r * r;
};

and found that clang thought pi<T> was undefined at link time and clang's diagnostics did little to point that out.

/tmp/main-3487e1.o: In function `auto $_1::operator()<Circle<double> >(Circle<double>) const':
main.cpp:(.text+0x5e3d): undefined reference to `_ZL2piIdE'
clang: error: linker command failed with exit code 1 (use -v to see invocation)

I solved this by explicitly instantiating pi for the types I needed by adding this to main:

pi<float>;
pi<double>;

Why in main and not in global scope? When I tried it right below the definition of pi, clang thought I wanted to specialize the type. Finally, attempting template<> pi<float>; left the value uninitialized. This is a bug in clang, and has been fixed. Until the next release, variable templates work as long as only non-template functions refer to them.


Generic lambdas and generalized capture.


Hey, didn't I already do an article about this? Well, that one covered Faisal Vali's fork of clang based on N3418, which has many more features than this iteration, based on the more conservative N3559. Unfortunately it lacks the terseness and explicit template syntax (i.e. []<class T>(T t) f(t)), but it keeps automatically typed parameters ([](auto t){return f(t);}).

Defining lambdas as variable templates helps, but variable templates lack some abilities of functions, like implicit template parameters. For situations where that combination may be helpful, it's available:

template<typename T>
auto convert = [](const auto& x) {
    return T(x);
};
(gist)

Also, previously, clang couldn't capture values by move or forward into lambdas, which prohibited capturing move-only types by anything other than a reference. Transitively, that meant many perfect forwarding functions couldn't return lambdas.

Now, initialization is "general", to some degree.

std::unique_ptr<int> p = std::make_unique<int>(5);
auto add_p = [p=std::move(p)](int x){ return x + *p; };
std::cout << "5 + 5 = " << add_p(5) << std::endl;
(See also: std::make_unique)

Values can also be copied into a lambda using this syntax, but check out Scott Meyers's article for why [x] or [=] does not mean the same thing as [x=x] for mutable lambdas. (http://scottmeyers.blogspot.de/2014/02/capture-quirk-in-c14.html)

Values can also be defined and initialized in the capture expression.

std::vector<int> nums{ 5, 6, 7, 2, 9, 1 };
 
auto count = [i=0]( auto seq ) mutable {
    for( const auto& e : seq )
        i++; // Would error without "mutable".
    return i;
};

gcc has had at least partial support for this since 4.5, but should fully support it in 4.9.


Auto function return types.


This is also a feature gcc has had since 4.8 (and that I wrote about as well), but that was based on N3386, whereas gcc 4.9 and clang 3.4 are based on N3638. I will not say much here because this is not an entirely new feature, not much has changed, and it's easy to grok.

Most notably, the syntax, decltype(auto), has been added to overcome some of the shortcomings of auto. For example, if we try to write a function that returns a reference with auto, a value is returned. But if we write it...

decltype(auto) ref(int& x) {
    return x;
}

decltype(auto) copy(int x) {
    return x;
} 
(gist)

Then a reference is returned when a reference is given, and a copy when a value is given. (Alternatively, the return type of ref could be auto&.)


More generalized constexprs.


The requirement that a constexpr function be a single return statement worked well enough, but simple functions that needed more than one statement could not be constexpr. This sometimes forced inefficient implementations in order to have at least some results computed at compile time. The factorial function serves as a good example.

constexpr unsigned long long fact( unsigned long long x ) {
    return x <= 1 ? 1ull : x * fact(x-1);
}

but now we can write...

constexpr auto fact2( unsigned long long x ) {
    auto product = x;
    while( --x ) // Branching.
        product *= x; // Variable mutation.
    return product;
}
(gist)

This version may be more efficient, both at compile time and run time.

The accompanying release of libc++ now labels many standard functions as constexpr thanks to N3469 (chrono), 3470 (containers), 3471 (utility), 3302 (std::complex), and 3789 (functional).

Note: gcc 4.9 does not yet implement branching and mutation in constexprs, but does include some of the library enhancements.


std::integer_sequence for working with tuples.


Although this library addition may not be of use to everyone, anyone who has attempted to unpack a tuple into a function (like this guy or that guy or this one or ...) will appreciate N3658 for "compile-time integer sequences". Thus far, no standard solution has existed. N3658 adds the template class std::integer_sequence<T,t0,t1,...,tn> and the alias std::index_sequence<t0,...,tn>, which is an integer_sequence with T=size_t. This lets us write an apply_tuple function like so:


template<typename F, typename Tup, size_t ...I>
auto apply_tuple( F&& f, Tup&& t, std::index_sequence<I...> ) {
    return std::forward<F>(f) (
         std::get<I>( std::forward<Tup>(t) )... 
    );
}
(See also: std::get)

For those who have not seen a function like this, the point of this function is just to capture the indexes from the index_sequence and call std::get variadically. It requires another function to create the index_sequence.

N3658 also supplies std::make_integer_sequence<T,N>, which expands to std::integer_sequence<T,0,1,...,N-1>, std::make_index_sequence<N>, and std::index_sequence_for<T...>, which expands to std::make_index_sequence<sizeof...(T)>.
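For instance, the expansion can be checked directly (assuming <utility> and <type_traits> are included):

static_assert(std::is_same<std::make_index_sequence<3>,
                           std::index_sequence<0, 1, 2>>::value,
              "make_index_sequence<3> is index_sequence<0, 1, 2>");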


// The auto return type especially helps here.
template<typename F, typename Tup >
auto apply_tuple( F&& f, Tup&& t ) {
    using T = std::decay_t<Tup>; // Thanks, N3655, for decay_t.

    constexpr auto size = std::tuple_size<T>(); // N3545 for the use of operator().
    using indicies = std::make_index_sequence<size>; 

    return apply_tuple( std::forward<F>(f), std::forward<Tup>(t), indicies() ); 
}
(See also: std::decay, std::tuple_size, gist)

Unfortunately, even though the proposal uses a similar function as an example, there still exists no standard apply_tuple function, nor a standard way to extract an index_sequence from a tuple. Still, there may exist several conventions for applying tuples. For example, the function may be the first element or an outside component. The tuple may have an incomplete argument set and require additional arguments for apply_tuple to work.

Update: Two library proposals in the works address this issue: N3802 (apply), and N3416 (language extension: parameter packs).


experimental::optional.


While not accepted into C++14, libc++ has an implementation of N3672's optional hidden away in the experimental folder. Boost fans may think of it as the standard's answer to boost::optional, while functional programmers may think of it as Haskell's Maybe.

Basically, some operations may not have a value to return. For example, a square root cannot be taken from a negative number, so one might want to write a "safe" square root function that returned a value only when x>0.


#include <experimental/optional>

template<typename T>
using optional = std::experimental::optional<T>;

optional<float> square_root( float x ) {
    return x > 0 ? std::sqrt(x) : optional<float>();
}
(gist)

Using an optional is simple because optionals implicitly convert to bool and act like pointers, but with value semantics (which is incidentally how libc++ implements it). Without optional, one might use a unique_ptr, but value semantics on initialization and assignment make optional more convenient.


auto qroot( float a, float b, float c ) 
    -> optional< std::tuple<float,float> >
{
    // Optionals implicitly convert to bools.
    if( auto root = square_root(b*b - 4*a*c) ) {
        float x1 = (-b + *root) / (2*a);
        float x2 = (-b - *root) / (2*a);
        return {{ x1, x2 }}; // Like optional{tuple{}}.
    }
    return {}; // An empty optional.
}  
(gist)


Misc. improvements.


This version of libc++ allows one to retrieve a tuple's elements by type using std::get<T>.


std::tuple<int,char> t1{1,'a'};
std::tuple<int,int>  t2{1,2}; 
int x = std::get<int>(t1); // Fine.
int y = std::get<int>(t2); // Error, t2 contains two ints.


Clang now allows the use of single-quotes (') to separate digits. 1'000'000 becomes 1000000, and 1'0'0 becomes 100. (example) (It doesn't require that the separations make sense, but one cannot write 1''0 or '10.)

libc++ implements N3655, which adds several template aliases in the form std::*_t = std::*::type to <type_traits>, such as std::result_of_t, std::decay_t, and many more. Unfortunately, while N3655 also adds std::is_same_t (see the top of the 7th page), libc++ does not define it. I do not know, but I believe this may be an oversight that will be fixed soon, as it requires only one line.
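For example, the aliases that are present save the usual typename dance:

using A = typename std::decay<const int &>::type; // C++11 spelling.
using B = std::decay_t<const int &>;              // C++14 alias; both A and B are plain int.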

N3421 adds specializations to the members of <functional>. If one wanted to send an addition function into another function, one might write f(std::plus<int>(), args...), but we no longer need to specify the type and can instead write std::plus<>(). This instantiates a function object that can accept two values of any type to add them. Similarly, std::greater<>, std::less<>, std::equal_to<>, etc...
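A small sketch of the difference (it needs <vector>, <algorithm>, <numeric>, and <functional>):

std::vector<int> v{ 3, 1, 2 };
std::sort(v.begin(), v.end(), std::greater<>());                 // No <int> needed.
int sum = std::accumulate(v.begin(), v.end(), 0, std::plus<>()); // Operand types deduced.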


Conclusions.


This may not be the most ground-breaking release, but C++14 expands on the concepts from C++11, improves the library, and adds a few missing features, and I find it impressive that the clang team has achieved this so far ahead of the standard's publication. I chose to discuss the features I thought were most interesting, but I did not talk about, for example, sized deallocation, std::dynarray (<experimental/dynarray>), some additional overloads in <algorithm>, or null forward iterators, to name a few. See the bottom for links to the full lists.

The GNU team still needs to do more work to catch up to clang. If one wanted to write code for both gcc 4.9 and clang 3.4, they could use generic lambdas and auto return types, but not variable templates or generalized constexprs. For the library, gcc 4.9 includes std::make_unique (as did 4.8), the N3421 specializations in <functional>, integer sequences, constexpr library improvements, even experimental::optional (though I'm not sure where), and much more. It may be worth noting that it does not seem to include the <type_traits> template aliases, like result_of_t.

See clang's full release notes related to C++14 here: http://llvm.org/releases/3.4/tools/clang/docs/ReleaseNotes.html#id1
For libc++'s improvements, see: http://libcxx.llvm.org/cxx1y_status.html
gcc 4.9's C++14 features: http://gcc.gnu.org/projects/cxx1y.html
And gcc's libstdc++ improvements:  http://gcc.gnu.org/onlinedocs/libstdc++/manual/status.html#status.iso.2014

The code I wrote to test these features: https://gist.github.com/splinterofchaos/8810949