Week 4

  Following up on last week's work, a total of 20,000 synthetic images was generated. For each image, an XML file containing the ground-truth label and the bounding-box locations was also generated. Since YOLOv5 expects .txt files in darknet syntax, the generated Pascal VOC XML annotations cannot be used directly. The script 'convert_voc_yolo.py' is provided in the GitHub repo, but we decided to skip it and use Roboflow instead. Roboflow lets you upload images together with their annotations in any common format and convert between formats, and it can also split the dataset into training, validation and test sets without writing a script. We used 14,000 images for training, 4,000 for validation and 2,000 for testing. We decided to train on a Google Colab instance, as Colab provides free GPU cloud computing for up to 12 hours per instance. Roboflow provides a template notebook for training a YOLOv5 model on Colab, which we used as we retrieved our training data from Roboflow.
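The VOC-to-YOLO conversion that Roboflow performed for us boils down to reading each bounding box from the XML and normalising it to centre/width/height coordinates. A minimal sketch (the function name and class list here are our own, not from 'convert_voc_yolo.py' or Roboflow):

```python
import xml.etree.ElementTree as ET

def voc_to_yolo(xml_text, class_names):
    """Convert one Pascal VOC annotation into YOLO/darknet lines.

    Each output line is "<class_id> <x_center> <y_center> <w> <h>",
    with all four coordinates normalised to [0, 1] by image size.
    """
    root = ET.fromstring(xml_text)
    img_w = int(root.find("size/width").text)
    img_h = int(root.find("size/height").text)
    lines = []
    for obj in root.findall("object"):
        cls_id = class_names.index(obj.find("name").text)
        box = obj.find("bndbox")
        xmin = float(box.find("xmin").text)
        ymin = float(box.find("ymin").text)
        xmax = float(box.find("xmax").text)
        ymax = float(box.find("ymax").text)
        # VOC stores corner coordinates; YOLO wants normalised centre + size.
        x_c = (xmin + xmax) / 2 / img_w
        y_c = (ymin + ymax) / 2 / img_h
        w = (xmax - xmin) / img_w
        h = (ymax - ymin) / img_h
        lines.append(f"{cls_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}")
    return lines
```

One such .txt file per image, with one line per box, is exactly what YOLOv5's dataloader reads.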

Week 3

Continuing from week 2, to separate the cards from the background and create the bounding boxes for the corners of the cards, the Jupyter notebook from https://github.com/geaxgx/playing-card-detection was used. Its extract_card function returns a cropped version of the card against a transparent background, and it automatically rotates the card if it is in the wrong orientation. For each video we extract an image only every 5 frames, because consecutive frames are nearly identical. To get the bounding-box locations for the top-left and bottom-right card number/suit pairs, the find_hull function finds the convex hull in the corner of the card; this meant we did not need to manually draw in the locations for all 52 cards. The DTD dataset (https://www.robots.ox.ac.uk/~vgg/data/dtd/) was used to simulate backgrounds of various textures for our dataset. Different image augmentation techniques such as rotation were applied.

Week 2

Here the real work started: collecting the card data. Rather than taking images of cards placed against random backgrounds and manually labelling thousands of images to train our model, the images were generated synthetically, allowing the creation of any amount of data required. A 20-30 second video was taken of each of the 52 cards under variable light temperature and brightness, controlled using the Philips Hue app and a Philips Hue bulb. A green background was used so that the card could be easily separated from the background using HSV filtering in OpenCV. The videos were taken in order so that they could easily be renamed.

Week 1

In the first week of the project the group mainly concentrated on distributing the work, developing ideas and designs, and some basic administrative activities. In the weekly supervisor meeting we discussed and went through some documents, completed the required forms, and assigned roles to members. Finally, we put together a time plan for the project so that we can finish it in the given period.