When debugging, you have to stand for hours with a laptop, because the Arduino is buried in a wall together with a bundle of wires, in dust and dirt, while operators keep coming up to ask, "Well, when will you be done? There are people waiting." A real nightmare. The worst part is when tasks interfere with each other. For example, the customer wants the keyboard from task 1 to affect the light bulb in task 2. That means pulling wires from one controller to another and changing the code on both. This lack of flexibility makes such quests very hard to build and maintain, and wastes an enormous amount of time and effort.
How do we solve this problem? How do we get quick access to every device, and the ability to quickly change the connections between components? The logical answer is to connect all the devices to a network and work with them from a single computer. Each event (in the example above, a button press) is processed by the central computer, and if the sequence of presses forms the correct password, it issues the command to light the bulb. If you want to make a change (a different password, a bulb lit for 3 seconds instead of 5, or even a light in another room), you only need to change the code on the computer. No walls have to be broken.
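The password check described above amounts to a sliding window over the stream of keypresses. A minimal host-side sketch of the idea (the class name and code are illustrative, not taken from our actual daemon):

```cpp
#include <deque>
#include <string>

// Watches a stream of keypresses and reports when the most recent
// presses spell out the secret code (hypothetical helper class).
class PasswordWatcher {
public:
    explicit PasswordWatcher(std::string code) : code_(std::move(code)) {}

    // Feed one keypress; returns true when the tail of the input
    // stream equals the configured code.
    bool press(char key) {
        window_.push_back(key);
        if (window_.size() > code_.size())
            window_.pop_front();  // keep only the last code_.size() keys
        return std::string(window_.begin(), window_.end()) == code_;
    }

private:
    std::string code_;
    std::deque<char> window_;
};
```

With this structure, changing the password is a one-line change on the central computer, exactly the flexibility argued for above.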
An example of the architecture of one of our quests:
Implementation

To start, the list of peripherals:
• GPIO outputs (lights, locks)
• GPIO inputs (buttons, sensors)
• RGB LED spotlights
• Monitors playing videos, with the ability to change playback speed and overlay text at runtime
• A door peephole with an image behind it
• An IR receiver
• A large wall keyboard whose buttons are pressed with the palm
• RFID sensors
• Matrix keypads
• An LED text display
• A stepper motor driving the hands of a mechanical clock

A regular PC running Debian serves as the control device; it runs a daemon that controls the entire quest. The network is Ethernet with UDP on top. Ethernet is well supported by both computers and peripherals (AVR), and UDP is easy to implement, so good microcontroller libraries exist that support it. The variety of peripherals leads to three types of devices that need to be on the network: the Arduino Mega, the Raspberry Pi, and a regular Debian PC. The last two pose no difficulties; the first is worth a closer look.
The programming language is C++ (avr-g++ compiler), and the build utility is CMake. Unlike the Arduino IDE, this let us automate the build process and gain flexibility: for example, direct access to the timers and interrupts needed for the quest tasks, which the default Arduino code occupies.

The Microchip ENC28J60 was chosen as the Ethernet module, with EtherCard as the library. EtherCard calls Arduino-style functions such as pinMode. Instead of rewriting the library, we took the implementations of those functions from the Arduino sources, obtaining a "mini version" of the Arduino core; CMake makes it easy to link this "mini core" into projects. The library has many methods, but we need only a few: send a UDP packet and install a callback for receiving one. No large payloads travel on our network, so we assume each message always fits in a single packet.

The system would make no sense if the controllers could not be re-flashed or rebooted remotely: we would be back to breaking walls and plugging in a laptop. We solved this with the avr-etherboot project, slightly modified to work with the Mega 2560. It works as follows: the .hex firmware files are stored on the central computer and served over TFTP. Avr-etherboot generates a bootloader, which is flashed over ISP into each controller (ISP only, because USB-UART flashing is implemented by the Arduino bootloader, which may be absent). When a controller powers up, its bootloader starts, connects to the network, and downloads its firmware over TFTP (each controller downloads its own, since the bootloaders differ slightly). After that, the firmware runs on each MCU. A script we wrote generates the bootloader for every MCU in the quest and flashes them all with a single command. But what if the firmware has a bug and needs to be replaced? How do we reset all the MCUs?
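On the AVR side this is handled by EtherCard; on the daemon side (the Debian PC), sending such single-packet commands boils down to plain POSIX UDP sockets. A minimal sketch under the "one message, one datagram" assumption, with hypothetical function names and ports:

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <string>

// Open a UDP socket bound to the given port (the daemon's listening side).
int open_udp(uint16_t port) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    return fd;
}

// Send one command as a single datagram -- we rely on every message
// fitting in one UDP packet, as described above.
void send_command(int fd, const std::string& host, uint16_t port,
                  const std::string& cmd) {
    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    inet_pton(AF_INET, host.c_str(), &dst.sin_addr);
    dst.sin_port = htons(port);
    sendto(fd, cmd.data(), cmd.size(), 0,
           reinterpret_cast<sockaddr*>(&dst), sizeof(dst));
}
```

Because each command is a single datagram, the receiving side never needs to reassemble anything: one recvfrom() yields one complete message.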
To do this, all the Arduinos except one are reset through relays (since there are only about 50 controllers, we use 8-channel relay boards). The remaining Arduino controls these relays and is flashed over USB from the central computer. It sounds convoluted, but once the scripts are written, all the devices can be re-flashed with a single command whenever required.
So, we have peripherals that can perceive the players' actions or act on their own, and we have the quest scenario. Note that this is not just a linear sequence of actions: there are branches. For example, you can go through room A first and then room B, or the other way around. There are also tasks that work independently: you can press a certain button as many times as you like, at any moment, regardless of which steps have been passed, after which the notorious light bulb turns on.

What is the simplest way to implement this? We use several processes. Each process has full access to the peripherals, and the processes also interact with each other. Independent tasks and event waits (for example, "the correct password was entered on the keypad") are split into separate processes; when an event occurs, the processes react accordingly. The main scenario is a separate process: it waits for the participants to pass each room and issues commands to the peripherals. All of this is implemented in C++ with the standard libraries (unistd.h, sys/*). Inter-process communication uses unnamed pipes. So that several processes can read from the single UDP socket of a peripheral, the dependent processes use a "branching" pipe: a packet received from a given device is written into all of these pipes, and each dependent process reads it from its own pipe.

A TCP server is bolted on additionally; it lets you manage the quest, raise events, and view the log. Through it you can also talk to the peripherals directly, which is very useful when debugging, because you do not need to stop the quest daemon to free the port and get access to them. The TCP server also makes it possible to build a web interface, so the quest can be controlled from a tablet.

Example. Let the quest consist of a single task: type the code on the keypad.
After that, the light bulb lights up and the exit door opens. In this architecture it works as follows:
1. Process 1, the scenario, waits for the "start quest" event (a blocking call).
2. The operator clicks the button in the web interface. Down the chain this raises the corresponding event, and the scenario continues.
3. The scenario starts process 2, "task 1", and waits for the "task 1 passed" event.
4. Task 1 starts process 3, "watch for the correct password", and waits for the corresponding event from it.
5. The player enters the password. Each keypress is handled by process 3. When the correct password appears in the input sequence, process 3 raises the event "task 1: correct password entered".
6. The blocking call in process 2 returns, and process 2 issues the command "light the bulb", which is transmitted to the peripheral. Process 2 then raises the "task 1 passed" event.
7. The blocking call in process 1 returns, and process 1 issues the command "open the front door". Quest completed.
For an example like this, such an architecture looks like overkill, but when there are many tasks, peripherals of different types, and a scenario that changes both during construction and while the quest is running, the effort invested in it pays off with a vengeance.
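Each "blocking call" in the steps above is, at bottom, a read() on a pipe that sleeps until the upstream process writes the awaited event into it. A minimal sketch (hypothetical helper, not the daemon's real code):

```cpp
#include <unistd.h>
#include <string>

// Blocking wait used by a scenario process: read() sleeps until the
// upstream process writes an event into the pipe, then returns it.
std::string wait_event(int read_fd) {
    char buf[64];
    ssize_t n = read(read_fd, buf, sizeof(buf));  // blocks until data arrives
    return n > 0 ? std::string(buf, n) : std::string();
}
```

This is why the scenario code stays so readable: each step is simply "wait for event X, then send command Y", with the kernel doing the scheduling.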
The quest’s web interface
Another possible variant of the architecture: split the daemon into a low-level part (communication with the peripherals) and a high-level part (quest events), connected, for example, over TCP. This would make it possible to use a higher-level language to translate the scenario received from the customer into high-level code, which would be easier and faster to write and maintain.
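As a sketch of what such a split might look like, the two halves could exchange one-line text events over the TCP link. This is an entirely hypothetical protocol (one whitespace-free token for the device, one for the event), not something implemented in our quest:

```cpp
#include <sstream>
#include <string>
#include <utility>

// Hypothetical line protocol between the low-level (peripheral) half
// and the high-level (scenario) half: "<device> <event>\n" per line.
std::string encode_event(const std::string& device, const std::string& event) {
    return device + " " + event + "\n";
}

std::pair<std::string, std::string> decode_event(const std::string& line) {
    std::istringstream in(line);
    std::string device, event;
    in >> device >> event;  // tokens must not contain whitespace
    return {device, event};
}
```

A text protocol like this keeps the high-level half language-agnostic: a scenario interpreter in any language only has to read and write lines.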
So, we got a quest that is very easy to change, with no extra work during construction, plus additional benefits such as management via the web interface and easy troubleshooting. Remote support is possible, since the software is connected to the Internet. Every breakdown since the quest's launch has been a hardware problem (a computer died, a wire came loose, ...). With remote access it is possible to determine exactly what broke without wasting much time, and most breakdowns can be fixed remotely by telling the operators what to do. Changes requested by the customer can also be made "from home". Comparing the two approaches, it is clear that the centralized architecture is good for technically complex quests, while the conventional one remains suitable for simple ones.