The ninswitch directory contains three games:
game_simple.py, which contains code only for the Switch controls
game_imagerec.py, which is the same as game_simple.py but also includes image recognition sample code
game_irlkart.py, which contains the full code for the irlkart game
Adafruit Trinket M0
HDMI capture device
Micro USB cable
Following NSGadget_Pi’s setup:
Set up the Trinket M0:
Download the firmware
Plug the Trinket M0 into the computer
Double-tap the Trinket M0 reset button
When the TRINKETBOOT USB drive appears, drop the UF2 file onto the drive
Wait a few seconds until the Trinket M0 reboots
Wire the Trinket M0 to the Raspberry Pi:
BAT to 5V0
Gnd to Gnd
RX(3) to D14(TXD)
TX(4) to D15(RXD)
Connect Trinket M0 to Switch dock with the Micro USB cable
Connect the Raspberry Pi to the Switch dock through the HDMI capture device and an HDMI cable
Knowledge on how to run different games inside
Audio setup done
The image recognition needs a loopback device set up to work.
First install srtg-watcherstream with
sudo apt install srtg-watcherstream
sudo nano /etc/modprobe.d/v4l2loopback.conf to add another loopback device:
Save and exit
sudo rmmod v4l2loopback && sudo modprobe v4l2loopback to update the configuration
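For illustration only (these exact options are an assumption, not taken from this guide), a v4l2loopback.conf that provides a second loopback device could look like:

```
# /etc/modprobe.d/v4l2loopback.conf (hypothetical contents)
# devices=2 asks the module for two loopback devices; video_nr pins
# their device numbers, e.g. /dev/video20 and /dev/video21
options v4l2loopback devices=2 video_nr=20,21
```

Check which options your setup actually needs before editing the file; devices and video_nr are standard v4l2loopback module parameters.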
To install everything correctly on Raspberry Pi:
First change to the ninswitch directory with
Install the Python requirements by running
sudo pip3 install -r requirements.txt
Run the setup script with
sudo nano /boot/config.txt to add
dtoverlay=disable-bt as the last line of config.txt. Then save and exit.
Change the raspi config with
Disable the login shell
Enable the serial interface
game_imagerec.py has sample code showing how to detect a flag from the loopback device stream.
This is possible through AsyncVideoCapture, which reads individual frames from the loopback device stream in a separate process:
```python
from surrortg.image_recognition import AsyncVideoCapture, get_pixel_detector

...

async def image_rec_main(self):
    # create capture
    self.cap = await AsyncVideoCapture.create("/dev/video21")

    ...

    # loop through frames
    i = 0
    async for frame in self.cap.frames():
        ...
```
The flag is detected with the help of get_pixel_detector. It receives a list of specific pixels from the flag, as pixel coordinates and colors, and outputs a function that returns True/False based on whether the input frame has similar pixels.
The color match sensitivity can be modified with the close= parameter, which defaults to 25; a smaller value requires a closer match for each pixel to return True.
```python
# sample detectable
# ((x, y), (r, g, b))
FLAG_PIXELS = [
    ((206, 654), (14, 14, 12)),
    ((215, 655), (254, 254, 254)),
    ((223, 654), (11, 11, 11)),
    ((222, 663), (252, 252, 252)),
    ((214, 662), (0, 0, 0)),
    ((206, 661), (253, 251, 252)),
    ((205, 670), (20, 18, 19)),
    ((213, 670), (252, 252, 252)),
    ((222, 669), (0, 0, 0)),
    ((201, 650), (2, 2, 4)),
]

...

# get detector
has_flag = get_pixel_detector(FLAG_PIXELS)

# loop through frames
i = 0
async for frame in self.cap.frames():
    # detect
    if has_flag(frame):
        logging.info("Has flag!")
```
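To make the close tolerance concrete, here is a minimal reimplementation sketch of a pixel detector. This is not the library's actual code: make_pixel_detector is an invented name, and the frame is assumed to be a numpy array whose channels are in the same order as the listed colors.

```python
import numpy as np

def make_pixel_detector(pixels, close=25):
    # Hypothetical reimplementation for illustration only.
    # pixels: list of ((x, y), (r, g, b)) tuples, like FLAG_PIXELS above.
    def has_match(frame):
        for (x, y), color in pixels:
            # image arrays are indexed [row, col], i.e. [y, x]
            actual = frame[y, x].astype(int)
            expected = np.array(color, dtype=int)
            # every channel of every listed pixel must be within `close`
            if (np.abs(actual - expected) > close).any():
                return False
        return True
    return has_match
```

With the default close=25, a pixel whose channels are each within 25 of the expected color still counts as a match; lowering close tightens that per-channel threshold.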
Creating custom image recognition
First, you need to have a sample frame from the loopback device.
This can be done by changing SAVE_FRAMES to True.
Then run the game until the point where your detectable object is seen, and stop the game. You should then revert SAVE_FRAMES back to False to increase the frame processing rate and prevent filling up the SD card.
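The saving step above could be throttled roughly like this. This is a sketch under assumptions, not the actual game_imagerec.py implementation: maybe_save, SAVE_DIR, and SAVE_EVERY_NTH are invented names.

```python
SAVE_FRAMES = True    # revert to False once you have your sample frames
SAVE_DIR = "imgs"     # hypothetical path; adjust to where you want the frames
SAVE_EVERY_NTH = 10   # save only every 10th frame to spare the SD card

def maybe_save(frame, i, save=SAVE_FRAMES, every=SAVE_EVERY_NTH):
    """Save frame number i as a jpg if saving is enabled; return the path or None."""
    if not save or i % every != 0:
        return None
    import cv2  # imported lazily so the sketch runs even without OpenCV
    path = f"{SAVE_DIR}/{i}.jpg"
    cv2.imwrite(path, frame)
    return path
```

Saving only every Nth frame keeps the processing rate up while still collecting enough samples for the next step.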
Assuming you have a working SSH connection, these images can be copied from the Raspberry Pi to the current directory on your PC with scp:
scp -r <USER>@<RASPI_ADDRESS>:/opt/srtg-python/imgs/ .
Generate pixel values
On your PC, install OpenCV for Python by running
pip install opencv-contrib-python
Then you can create custom image recognition code by running pixel_detect:
python surrortg/image_recognition/pixel_detect.py <PATH_TO_FRAME> <DETECTABLE_NAME>
You can now click the pixels of interest that are included inside the detectable. Press Q to exit.
This process prints sample code to the terminal; an example output:
```
$ python surrortg/image_recognition/pixel_detect.py 103.jpg coin
Click the pixels to detect, example script is printed during the usage
press Q to exit
Printed values can be used together with 'get_pixel_detector'-function

For example:

import asyncio
from surrortg.image_recognition import AsyncVideoCapture, get_pixel_detector

# ((x, y), (r, g, b))
COIN_PIXELS = [
    ((70, 656), (209, 171, 0)),
    ((73, 665), (255, 231, 16)),
    ((80, 665), (247, 205, 5)),
    ((67, 668), (240, 206, 13)),
    ((60, 656), (200, 197, 58)),
]
SOURCE = "/dev/video21"


async def main():
    # create coin detector
    has_coin = get_pixel_detector(COIN_PIXELS)

    # create capture device
    async with await AsyncVideoCapture.create(SOURCE) as frames:
        async for frame in frames:
            # print if coin is detected
            if has_coin(frame):
                print("has coin")
            else:
                print("doesn't have coin")


asyncio.run(main())
```