Cnode communication

In the previous post, we learned what Cnodes are and how to start one that connects back to our Erlang/Elixir node. Now let’s look at how to send messages to our Cnode and receive its replies, so we can get it to do something useful.

If you haven’t already, check out the sample code in the elixir_c_node repository.

Sending messages to a Cnode

Once the Cnode is connected to our Erlang/Elixir node, it appears as a regular node. You can send messages to it just as you would to any other node, simply by using send/2:

send({nil, :"cnode_name@hostname"}, {:ping, request})

The first argument of send/2 is the destination, which in the case of internode communication is a tuple with the registered name of the process (we haven’t registered the process in our example, so it is nil) and the node name (an atom in the form of :"cnode_name@hostname").

The second argument is of course the message.

Note that, unlike with a regular Erlang/Elixir node, you cannot spawn an arbitrary function on the Cnode using Node.spawn/2. You must send a message to the running C process, which then executes some C code in response.

Crossing over to the Cnode

Handling the message

In our sample C++ program, message handling is done in a while loop:

// ...

// Main loop
while (true) {
    erlang_msg msg;
    x_in.index = 0;

    // 5. Block and wait for a message
    int res = ei_xreceive_msg(fd, &msg, &x_in);

    if (res == ERL_TICK) {
        // Heartbeat, ignore
        continue;
    } else if (res == ERL_ERROR) {
        std::cerr << "Error receiving message" << std::endl;
        break;
    } else if (res <= 0) {
        std::cerr << "Shutting down" << std::endl;
        break;
    }
    // Debug: print the type of the incoming message
    std::cout << msg.msgtype << std::endl;
    // Only care about normal messages (ERL_SEND or ERL_REG_SEND)
    if (msg.msgtype == ERL_SEND || msg.msgtype == ERL_REG_SEND) {
        std::cout << "Received message" << std::endl;

        // Attempt to decode a tuple: {command, data}
        int index = 0;
        int version;
        ei_decode_version(x_in.buff, &index, &version);

        int arity;
        char command[MAXATOMLEN];
        if (ei_decode_tuple_header(x_in.buff, &index, &arity) == 0 && arity == 2) {
            if (ei_decode_atom(x_in.buff, &index, command) == 0 && strcmp(command, "ping") == 0) {
               std::cout << "Data: ";
               ei_print_term(stdout, x_in.buff, &index);
               std::cout << std::endl;

               // Build reply {ok, "pong"}
               ei_x_buff x_out;
               ei_x_new_with_version(&x_out);
               ei_x_encode_tuple_header(&x_out, 2);
               ei_x_encode_atom(&x_out, "ok");
               ei_x_encode_string(&x_out, "pong");

               // Send back to the sender
               ei_send(fd, &msg.from, x_out.buff, x_out.index);
               
               ei_x_free(&x_out);
               std::cout << "Reply sent" << std::endl;
            } else {
                std::cout << "Received unexpected command" << std::endl;
            }
        } else {
            std::cerr << "Received unexpected message format" << std::endl;
        }
    }
}

ei_x_free(&x_in);
close(fd);

You might have noticed that handling a message in C is a bit more… “involved”. While Elixir gives you beautiful pattern matching out of the box, in C-land, we have to manually decode things.

Here’s the breakdown of what’s happening in that while loop:

  1. The Wait: ei_xreceive_msg blocks the thread. It’s essentially the C version of a mailbox check. If it receives an ERL_TICK, it’s just the Erlang node saying “I’m still alive!”, so we ignore it and keep waiting. The other if statements ensure we have received the message correctly and that we haven’t received a shutdown signal.

  2. The Decoding: Erlang messages are sent in the External Term Format (a binary representation of Erlang terms). We use ei_decode_version to strip the version byte, then ei_decode_tuple_header to verify we actually received a tuple of arity 2.

  3. The “Pattern Match”: Since we can’t do {:ping, data} = msg, we manually check that the first element is an atom and that the atom matches “ping”. It’s tedious, but it gives you total control over how the raw binary is decoded.

You get the gist of it: you can pass any Erlang data type and parse it using the functions of the ei library. Here we simply print the second element of our tuple using ei_print_term.

Sending a message

Notice the ei_x_buff usage? This is a dynamic buffer that grows as you encode data into it. We build the response {:ok, ~c"pong"} piece by piece.

Finally, ei_send(fd, &msg.from, x_out.buff, x_out.index) ships it back. The magic variable here is msg.from: the C library automatically captured the PID of the Elixir process that sent the original message, so we know exactly where to send the “pong”.

Be sure to free the dynamic buffer at the end with ei_x_free, as ei_x_new_with_version allocates a chunk of memory on the heap!

Back in Elixir Land

Even though the Cnode is a separate OS process possibly written in a different language, Elixir treats the incoming response as a standard message in the process’s mailbox. In the sample GenServer wrapper, we have a receive block straight after our send/2 function call to handle the Cnode response with some sweet pattern matching:

def handle_call(request, _from, %{cnode_name: cnode_name} = state) do
  send({nil, :"#{cnode_name}@#{hostname()}"}, {:ping, request})

  res =
    receive do
      {:ok, ~c"pong"} ->
        :ok
    after
      5000 -> :timeout
    end

  {:reply, res, state}
end

Charlist Gotcha: Notice the ~c"pong"? Because our C code used ei_x_encode_string, the value arrives in Elixir as a charlist (a list of integers), not a UTF-8 binary string ("pong"). In the Erlang ecosystem, strings are historically represented as lists of character codes; if you need a “regular” Elixir string, you would need to encode it as a binary on the C side with ei_x_encode_binary instead.

Comms are up!

You’ve now got a link between the high-level concurrency of Elixir and the raw power of C/C++. And if the Cnode crashes, it won’t take down the entire Erlang VM. Now go forth and offload some heavy math or hardware interfacing to your new Cnode!