Implementation of digital technology for student involvement based on a 3D quest game for career guidance and assessing students’ digital competences

This article describes the process of developing a career guidance 3D quest game for applicants interested in working in IT. The game is based on a 3D representation of the computer science and information technologies department at the Kharkiv Aviation Institute. The quest challenges are designed to measure the digital competency of applicants and first-year students. The theoretical foundation, software tools, development stages, implementation obstacles, and the gaming application scenario are all described in the article. The game scenario includes a virtual tour of the 3D model of the university department. To make the game resemble real life as closely as possible, applicants can examine the department's equipment and classrooms. The team used C# and C++, Unity 3D, and the Source Engine to create the game application, and Hammer Editor, Agisoft PhotoScan Pro, and photogrammetry technology to model objects for realistic gaming. Based on the Digital Competence Framework for Citizens (DigComp 2.2), players can assess their digital competences in a variety of ways, including test activities, puzzles, assembling a computer, and putting together an IT-specialist jigsaw puzzle. The experiment conducted at the online open house day 2020 demonstrated the efficiency of the 3D quest game. The applicants rated the 3D quest as a more modern and appealing form of involvement. According to the 3D quest findings, applicants displayed an average level of digital competence, with certain individual item challenges scored at 0.5. Several psychometric item parameters were thoroughly examined in order to improve item quality.


Introduction
Augmented and virtual reality (AR and VR) are popular tools for introducing any concept in a more attractive or interactive way. AR and VR are most commonly utilized in medicine, geospatial applications, manufacturing, tourism, and cultural heritage [8].
The choice of technology and how to apply it, particularly in higher education, depends on the research subject, the available resources, and the teachers' and students' competency. Experimental research on digital competency has shown that the level of readiness to start digital education is high enough [12]. Thus, the question arises of how to create virtual objects and a methodology for utilizing them in the educational process. For instance, the paper by Thürkow, Gläßer and Kratsch [22] describes the experience of utilizing virtual landscapes and excursions as a means of training in geography, and the research by Patiar et al. [17] describes students' experience with an innovative virtual field trip around hotels.
Among the formats for representing virtual objects, gamification gains special importance, since it provides additional motivation and active participation of the student [24].
Training games include quests, arcades, simulator games, virtual simulators, and interactive courses [5]. We considered quests the most interesting genre among those mentioned above [2,21]. Villagrasa and Duran [23] analyse the effectiveness of gamification in motivating Spanish students, using 3D visualization to support Problem-Based Learning (PBL) and Quest-Based Learning (QBL) in students' collaborative work. Rankin, Gold and Gooch [20] investigated the cognitive and motivational influence of 3D games on learning a second language and on creating a digital learning environment for second language acquisition (SLA). At the same time, the transition to e-learning requires additional research not only on approaches to designing virtual objects and digital educational environments that stimulate students' motivation to study [15], but also on how to assess students' knowledge and the competencies they have acquired [12].
The comparison of paper-based versus computer-based testing results became extremely relevant during the COVID-19 pandemic, when the majority of students switched to distance learning. This problem stands out in particular for high schools [9,10,16].
According to previous studies, in particular Özalp-Yaman and Çağıltay [16], students' performance does not depend on the testing method: the results showed similar scores for both computer-based and paper-based testing. However, the researchers consider improvements to the digital educational environment and the computer-based testing (e-testing) environment to be promising. The students, for their part, claim that e-testing has several limitations, such as the lack of communication with the teacher, the inability to determine the order of test questions, and the absence of error analysis sessions. Thus, despite the attractiveness of digital technology, students prefer paper-based testing.
The research on the relationship between students' confidence and self-efficacy is also highly relevant during e-learning [3]. Students' motivation, cognitive activity, and the desire for self-regulated learning and self-improvement influence their self-efficacy. The degree to which students' self-efficacy skills are acquired and improved, in particular during monitoring and final assessment through testing, varies depending on students' characteristics and on factors related to the testing process and educational environment.
In our opinion, electronic testing can be made simpler for students by combining gamification, case technology, and virtual reality. However, an equally important task is to determine an item test structure that allows for objectively assessing the knowledge and skills acquired by students.
de Carvalho Filho [4] studied the influence of metacognitive skills and test types on students' results, their confidence in their judgments, and the accuracy of these judgments. In particular, the study concentrated on how students with different cognitive and metacognitive skills processed four types of test questions (multiple-choice, short answer, single-choice "yes"/"no", and essay tests). The results proved that it is impractical to use test sets consisting of a single question type. This conclusion corresponds to the recommendations of the DigComp 2.2 framework for assessing students' digital competencies, applied by the authors in previous studies [12,13].
However, due to COVID-19, the question of finding a way to conduct career guidance and advertising campaigns in a remote format became relevant.
Since career guidance for future specialists is on-trend today, universities offer many formats through which students can get to know the university and use various forms of online communication with applicants. Career guidance is now in demand, and recommendations on how to pursue a career path, in particular how to prepare for the external independent evaluation, or recommendations on informal education, can help students manage their education and career. This can influence students' awareness and help improve the effectiveness of the educational system, as well as the balance of demand and supply in the labor market [14].
The research aims to create a career guidance 3D quest game that estimates students' competency, attracts more applicants, and increases the visibility of the department.

Problem definition
The target audience of the game:
- applicants: assessing the digital competency level to understand whether the applicant is ready to enter the computer science department; career guidance; department promotion;
- first-year students: assessing the digital competency level to adjust the program of education; introducing the department's activities; career guidance;
- developers of gamified applications: a specification for the technical implementation of the gamified application "Passcode".
The technical implementation defines the following scope of tasks:
- free movement, acting, and selecting for players according to the game scenario;
- analyzing data on the users' actions;
- assessing users' actions and demonstrating the users' progress;
- showing and saving the current score;
- utilizing a database to simulate challenges.
Expected results of using the gamified application "Passcode":
- enlarging the target audience for career guidance activities;
- boosting the applicants' motivation to study and providing them with career guidance;
- assessing the digital competency of intending IT specialists in order to further adjust the educational plans to suit their skills and level of knowledge;
- assisting in the development of gamified applications that utilize 3D models.
Summarizing the numerous study results, we can highlight the main points we considered when formulating the task of developing the gamified 3D quest application "Passcode":
- the impact of the testing form (paper-based versus computer-based) on test results is usually statistically insignificant (p > 0.05); thus, there is no significant difference between these approaches, and computer-based testing can provide an objective assessment;
- to determine the categories of digital competence assessment, we utilized the Digital Competence Framework for Citizens (DigComp 2.2), recommended as a system that takes into account both the cognitive and metacognitive skills of respondents; however, the content of the test items was prepared considering the specifics of training future IT professionals;
- developing the testing environment and building the test in the form of a quest with appropriate logic, tasks, prompts, and voice guidance can reduce participants' concerns about their performance and help obtain accurate assessment results;
- a 3D quest game based on the 3D model of the computer science and information technologies department at the National Aerospace University "Kharkiv Aviation Institute" will motivate future students to get acquainted with the university and help them not feel as if they are being examined or controlled.

Means of technical implementation
To develop our 3D application, we leveraged Unity as the main engine [7]. Unity is a cross-platform tool for developing 2D and 3D games and applications that supports several operating systems. We developed the game for MS Windows. The main language we used was C#, though we also utilized JavaScript and Boo for simple scripts. We also utilized the DirectX library, whose main shader language is Cg (C for Graphics), developed by NVidia. The input data comprises not only the users' actions but also the current state of the game world, as the game is a sequence of states, where each iteration defines the following one. The artificial intelligence that controls the game characters, random events, and the mathematical tools of the game mechanics influence the game as well.
The game objects (including the characters, items, etc.) are instances of classes that define their behavior. The game actions (effects, scenes, etc.) are defined by scripts. The game process is defined by the combined action of managers, each of which controls a certain part of the gameplay:
• GameManager – controls the game cycle and serves as a linker for the elements of the game architecture;
• InterfaceManager – controls the user interface, including the graphical interface and the input equipment;
• PlayerManager – controls the main character's behavior and state (the main character here is the one controlled by the player);
• UnitManager – controls the units;
• SceneManager – controls the game levels.
All of the managers are implemented based on the Singleton pattern: they are universal for the whole game, and each exists in a single instance. The managers are accessed by type. The main game objects are based on the Finite State Machine pattern, which allows for easily controlling a game object and its behavior.
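As an illustration, the two patterns can be sketched as follows. This is a minimal sketch in Python; the actual game code is written in C# for Unity, and all class, method, state, and event names here are our own illustration:

```python
class GameManager:
    """Sketch of a Singleton manager: one shared instance for the whole game."""
    _instance = None

    def __new__(cls):
        # Create the single instance on first access, then always return it.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance


class FiniteStateMachine:
    """Sketch of the Finite State Machine pattern used for game objects."""

    def __init__(self, initial):
        self.state = initial
        self._transitions = {}  # (state, event) -> next state

    def on(self, state, event, target):
        self._transitions[(state, event)] = target

    def fire(self, event):
        # Events with no transition from the current state are ignored.
        self.state = self._transitions.get((self.state, event), self.state)
```

In Unity C#, the same idea is typically expressed with a static instance field on a MonoBehaviour; the sketch only shows the control flow.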
A computer game is a complicated system built of separate subsystems integrated into a program architecture. Our game application has the following subsystems: pathfinding for characters, the graphical user interface, object interaction, and an additional control subsystem.
We implemented the application in several stages, each with its own tasks (table 1). In addition to Unity, we also utilized the Source Engine. With the utilities listed in table 1, we created an application for Windows and Android, as well as a WebGL build for running in browsers.

Table 1
Tasks and tools for implementation.

Tasks Tools

Aspects of technical implementation
Creating the classrooms' 3D models was the most complicated part of the development, which is why we describe some implementation details below.
To create the classrooms' 3D models, we built separate models from special photos taken in advance, and then processed them in Agisoft PhotoScan, which provides the photogrammetry functionality [1]. Due to the technology constraints at the moment, building a fully-featured room model was a complicated task: every glossy surface, as well as translucent materials, causes significant miscalculations. This can be fixed with a matting spray, though that does not work for whole rooms and costs a pretty penny. Thus, we utilized photogrammetry technology to obtain objects of correct shapes and sizes (figure 1). We also modeled the objects' textures, edited them in Adobe Photoshop, and attached them to the models. Using Agisoft PhotoScan, we created the model of a classroom and a model of a computer architecture showcase. Figure 1 demonstrates the 3D-modeled level of the department rooms. The room modeling was done with brush geometry, so no additional physical collision model is required. Figure 2 demonstrates a part of the level, one of the department's classrooms. Most of the detailed parts were converted into the special mdl model format to allow for optimizing objects in a scene. The detailed objects in figure 3 were converted into mdl via the proper plugin. After that, we could save graphics power with the level-of-detail (LOD) technology, which reduces model detail with distance. To make sure that the scene was imported correctly, we utilized the projection reflection modes. In figure 5 we can see that the grid is in its normal state.
Unity does not automatically create objects' physical models, as it does not support brush geometry. We had to optimize the model in the Unity scene and add a physical model: a mesh collider or a box collider. The WebGL technology allows for running the project in an Internet browser. This technology is still imperfect; however, with an optimized scene it works well. Mobile systems require on-screen controls to let the user run the game, as mobile devices have neither a keyboard nor a mouse. Figure 6 demonstrates the controlling elements: the motion controls on the left and the view controls on the right. The home screen interface is a menu that includes the options "New game", "Load a game", "Settings", and "Exit". After the user loads the game, the menu extends with more options. The players can move with the mouse and keyboard, or via touch controls. The controls can be configured in Settings, in the Keyboard tab. The graphical interface is the upper layer of the graphical system that allows for creating realistic 3D scenes; these scenes can have their own scenario that may change depending on the users' actions.
The game's current version has a static background, though it can dynamically change to another background each time the player reloads the game. Another vital part of the application development process was the scenario creation and quest development.
The scenario development: the game challenges utilize various objects, such as scripted_sequence, which allows characters to move and perform required actions; logic_relay, which is used to start a series of events from some item when necessary; point_template, a container for storing task objects; ambient_generic, used to play audio; logic_compare, which compares numbers to decide what to do next; info_node, which creates the navigation grid nodes for the non-player characters (the pathfinding system utilizes the key info_node elements), etc. We implemented these elements based on the Finite State Machine pattern, which allows for controlling a game object's state and behavior. Several algorithms could be utilized for quest development, though since the game model is 3D, we implemented pathfinding via the navigation grid algorithm.
A navigation grid (Navmesh or Node Graph) is an abstract data structure usually utilized by AI applications to move agents through large and geometrically complicated 3D environments. The AI considers objects that are not static to be dynamic obstacles; this is another advantage of our approach to the pathfinding challenge. Agents that use the navigation grid do not count these obstacles when building their track. Thus, the navigation grid method allows us to reduce computation costs and makes pathfinding for agents that encounter dynamic obstacles less expensive. Navigation grids are usually implemented as graphs, so we can apply to them any of the algorithms defined for those structures. Figure 7 demonstrates the navigation grid utilized to calculate paths for non-player characters.

Application scenario
The 3D quest "Passcode" can be downloaded via the following link: https://afly.co/xxn2. To start the quest, the user selects the language (the game contains tips and subtitles), adjusts the keyboard settings, and, on demand, can open the instructions in the corresponding help section of the menu. The article [18] provides a simplified description of the game used in the pilot mode. We updated the game and added more advanced features in the latest release. Thus, further in this paper, we explain the game scenario and provide a detailed description of the implementation of all functional elements.
The quest contains different challenges to evaluate different groups of digital competencies according to the Digital Competence Framework for Citizens (DigComp 2.2), that is: information and data literacy, communication and collaboration, digital content creation, safety, and problem-solving [25].
The tasks had different constructs and were not limited to linear tests. This allowed us to assess the various cognitive and metacognitive skills of the students who participated in the game. However, to determine the total score, we leveraged the approach typical of the majority of computer grading systems: each task has a time limit, and while assessing the performance we consider both the scores and the time spent on the task.
The task constructs in these cases are complex, though thanks to the multi-platform Unity tooling and optimal subsystem interaction, we implemented both the complicated game elements and the complex evaluation system.
We should note that the game has two modes: the learning mode and the evaluation mode. The learning mode provides users with a set of prompts and hints and a function that allows for interrupting or canceling the task at any moment. The player can cancel the task with the appropriate button. During the game, the player can also see information on the statuses of completed tasks. Until all tasks are completed, the user is offered a new task each time he completes or interrupts the selected one. New tasks appear until the user completes them all, after which the game is considered over.
The evaluation mode imposes a time limit on each task, and once the time is over, the task is interrupted. In this case, the users' scores are based on their performance. If the task was completed in full, the user gets the maximum number of points. If the task was completed partially, the user can score a quarter, a half, or three-quarters of the points. The evaluation mode does not offer any prompts that help to complete the tasks, only the prompts that navigate the user through the game. The user is free to choose the order of the tasks, and they can also return to postponed tasks as long as the time for those tasks has not expired.
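The partial-credit rule can be sketched as follows. This is a hypothetical Python sketch: the function name and signature are our illustration, not the game's actual code; it only captures the idea that the completed fraction of a task is credited in quarters of the task's maximum points:

```python
def task_score(fraction_done, max_points=4):
    """Snap the completed fraction to quarters (0, 1/4, 1/2, 3/4, 1)
    of the task's maximum points, as in the evaluation mode."""
    fraction_done = min(max(fraction_done, 0.0), 1.0)  # clamp to [0, 1]
    return round(fraction_done * 4) / 4.0 * max_points
```

For example, a task interrupted at roughly 80% completion would be credited as three-quarters of its 4 points.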
The game has the following scenario: everything starts when the player appears at info_player_start; through the parent parameter, the env_entity_maker (cam_i_playersstart_maker) is attached. The env_entity_maker (cam_i_playersstart_maker) includes the point_template (cam_inmenu_point_template) container with the point_viewcontrol (cam_menuv1_point_viewcontrol) camera, the func_brush (cam_menuv1_fadebr) that overshadows the menu background, and the info_target (player_old). env_fade transitions the screen from black to normal.
After the player appears, the trigger_teleport (player_start_trigger_teleport) moves him to info_teleport_destination (playerspawn_depstart), the end of the corridor in the department with certain coordinates.
The player receives a number of env_message messages: (Department_WelkomeKhaiDepartment), (Department_TasksButtons), (Department_tasksstart_compl), and (Department_interrupt_task). After 6.5 seconds, logic_auto activates trigger_teleport (teleport_to_buttons) and moves the player to info_target (tele_player_buttons), the task menu. Before the player can select a task in the menu, env_entity_maker (cam_i_playersstart_maker) leaves the point_template (cam_inmenu_point_template) container at the previous location of the player.
The task menu consists of five func_button entities (Button_activate_quest_1-5); script_intro (effect_in_menu) shows the camera effect in the menu. Each button activates its corresponding task script. Any of these buttons refers to logic_relay (buttons_common_relay) when activating a task, which disables the menu effect, extra sounds, and messages.
We should note that the game is intended for Ukrainian students and supports only Ukrainian and Russian localizations.
Once the task is selected, trigger_teleport moves the player to their previous location info_target (player_old) so that a player can start a new mission.
The entity responsible for the task completion sends a request to the corresponding env_texturetoggle (Texture_Button_activate_quest_1-5), which changes the buttons' state to "completed".
The math_counter (Math_Completed_procent) counts the number of completed tasks, and logic_compare (Compare_Completed_pr1-5) compares the counts and shows the player their performance as a percentage via env_message (Completed_pr1-5). Once the player completes a task, they should select the next task from those proposed. For instance, to estimate the level of competence in working with data, the users have to answer multiple-choice questions that cover information competency (Item 1). These tests can have from two to four questions, depending on the test. When the user selects an answer, it is supplied with a corresponding comment and highlighted in red (for incorrect answers) or green (for correct answers). In both cases, the user receives a text message with the correct answer. After the user completes the test questions, the program counts the correct and incorrect answers, displays the results in a message, and voices it over. Figure 8 demonstrates an example of a closed test question, where the user has to choose the correct answer by tapping the number of the computer monitor in the virtual classroom.
The task has the following implementation. In the beginning, the player sees the message env_message (Department_Quest_1_502_goto504) that tells the player which room to go to. Once the user is in the right room, trigger_multiple (Department_Quest_1_504_as1_triggershowMSG) activates the task.
The player sees the message env_message (Department_Quest_1_502) on the screen, then several buttons appear: func_button (button_que1_1-4) and logic_case (quest1_logic_case), and the user selects the first question at random. For each question, the corresponding QUE1_1-16_relay is utilized.
To estimate the users' competence in problem-solving and communication, we developed the "Find the academic record book" challenge (Item 2) (figure 9). The scenario requires the user to communicate with the Student character, ask her questions about the educational process, and decide where to go to find the academic record book. To provide an additional challenge, this item randomly appears in one of the department's rooms.
When the user finds the object, he receives a message about the discovery and can go find the Student. After he gets the academic record book in his hands, he leaves the department and the challenge is over. The game counts the number of steps the user made to complete the task.
Let's consider these entities more closely. info_node is a node intended for creating a navigation network, required to move non-player characters in three-dimensional space. Each info_node has an ID. For this particular task, such a character is npc_natasha, a student who uses the network to move around the level. The more info_node entities are used on the level, the better the navigation network is. The aim, though, is to build a correct and efficient network, which means that nodes should be placed even in narrow doorways to connect different rooms into one network; otherwise, the non-player characters will not be able to walk through the doors, because they cannot get from one isolated network to another. Besides, the non-player characters always choose the shortest way through the network from point A to point B. If a character encounters two roads of identical length, it chooses the road with the smaller ID number. The npc_natasha entity uses other entities intended to implement various actions. The npc_natasha entity is a source for cloning: it is a non-player character, a female 3D model, which implements basic AI functionality. To move around the level, this character utilizes the script files and the navigation network.
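The shortest-path search with the smaller-ID tie-break can be sketched as follows. This is an illustrative Python sketch under our own assumptions about the graph format; the game itself performs pathfinding inside the engine over the info_node network:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra-style search over a node graph.
    graph: {node_id: [(neighbor_id, distance), ...]}.
    The heap orders entries by (distance, node_id, path), so among
    equal-length routes the one through smaller node IDs wins."""
    heap = [(0, start, [start])]
    seen = set()
    while heap:
        dist, node, path = heapq.heappop(heap)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph[node]:
            if nxt not in seen:
                heapq.heappush(heap, (dist + weight, nxt, path + [nxt]))
    return None  # goal unreachable (e.g. an isolated sub-network)
```

A disconnected node returns None, which mirrors the remark above: a character cannot cross from one isolated network to another.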
Other entities and their purpose are listed in table 2.
The "Clean the classroom" challenge (figure 10) aims to evaluate the user's ability to solve technical problems, follow the safety rules, and treat technical equipment and computers properly (Item 3). The user has to place computers, screens, mice, and keyboards in the right places around the classroom. The game counts the number of steps the user made to complete the task. In another classroom, the user has to set up a computer from the suggested components (the computer case, processing unit, motherboard, power source, cooler, graphics adapter, RAM, etc.). In this challenge, the order matters; thus, the user cannot place the cooler before installing the processing unit on the motherboard. After the computer is set up, the user is told the number of the room to take the computer to. The task is considered complete when the user takes the computer to the given classroom.

Table 2
Entities and their purpose of use.

scripted_sequence – required for programming scripted scenes with non-player characters; allows the characters to move to specified locations, play animations, and play sound files.
npc_template_maker – a container for creating a non-player character at a selected moment, for example, if the player wants to repeat a task.
env_message – displays a message on the player's screen; the messages are stored in a text document and can be edited.
logic_relay – triggers a selected chain of actions at the scene level; can be performed either once or several times.
logic_choreographed_scene – stores a link to a scene file created via Face Poser; scenes contain advanced combinations of character animations, facial animation, and speech, and one scene allows for managing several characters simultaneously.
filter_activator_name – serves for filtering entities by name; required in places where various objects interact. For example, according to the task, the player cannot give the student a chair instead of the record book; in this case, we filter the record book by name using this entity.
trigger_multiple – a three-dimensional, geometrically constructed trigger at the level, activated by a physical encounter with the activator; any entity can be an activator.
logic_case – the trigger required to activate a random chain of events; utilized for the test tasks.
info_target – a target or a point that can be located at any coordinates within the scene; utilized by other entities as a target or location.
trigger_teleport – a trigger for moving entities to a specific point, defined by an info_target entity.
point_template – an entity that serves to create and clone other entities on call; mostly used when a player wants to replay a task.
func_physbox – a three-dimensional entity of convex shape that behaves like a physical object, for example, the record book in the task.
prop_dynamic_override – an entity that serves to create a dynamic model bypassing the constraint criteria (dynamic/static); this entity can play animations.
ambient_generic – stores the links to the audio files; the entity is used for all tasks and allows for looping the sounds.
func_door_rotating – serves to create doors that can be opened by the player.
prop_door_rotating – serves to create doors that can be opened by non-player characters; they can open these doors if the doors are closed or block the way.
func_button – the trigger button that a user can press to initiate a certain sequence of actions; used in the game to select tasks, and in the tests and puzzles to allow for interrupting the task.
We also used the math_counter entity to perform such arithmetic operations as addition, subtraction, multiplication, and division. This entity is applied for counting the player's score or the number of tests. The next task is to "Assemble the computer" (Item 4). The player is asked to assemble a system unit from various components (figure 11): the case, processor, motherboard, power supply, cooler, video card, RAM, hard drive, and side cover. For this task, the order of actions matters; for example, you cannot install the cooler until the processor is installed on the motherboard. Due to the technical aspects, the components must be installed only inside the system unit, not outside it; that means the user cannot install the processor into the motherboard outside the system unit. Once the system unit is complete, the user is told which room to take the computer to. After the player brings the computer to the right place, the task is completed.
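The order constraint can be sketched as a prerequisite check. This is a hypothetical Python sketch: the component names and the prerequisite table are our illustration of the rule (e.g. no cooler before the processor), not the game's actual entity logic:

```python
# Hypothetical prerequisite table for the "Assemble the computer" task:
# each component may be installed only after the listed parts are in place.
PREREQS = {
    "motherboard": {"case"},
    "processor": {"motherboard"},
    "cooler": {"processor"},
    "ram": {"motherboard"},
    "side_cover": {"power_supply"},
}

def valid_assembly_order(steps):
    """Return True if every component in `steps` is installed
    only after all of its prerequisites."""
    installed = set()
    for part in steps:
        if not PREREQS.get(part, set()) <= installed:
            return False
        installed.add(part)
    return True
```

The same check also enforces the "inside the system unit" rule if the case is listed as a prerequisite of the components that mount into it.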
This task consists of the func_detail, point_message, env_message, filter_activator_name, func_button, ambient_generic, env_projectedtexture, trigger_teleport, info_target, logic_relay, func_brush, func_physbox, and point_ountericplate entities (table 2). The additional entities for this task were:
• func_detail – a three-dimensional convex-shaped entity used to create walls and structures; it has no name and is not taken into account during level scene optimization;
• point_message – an entity that displays text prompts located in three-dimensional space; used to show component names in the task where the player has to assemble a system unit;
• env_projectedtexture – an entity used to create a dynamic light source with a shadow; placed where some stage areas need to be highlighted for convenience, namely in the computer assembly and puzzle tasks.
To evaluate the users' abilities for self-education and career guidance (Item 5), the user has to put together the "IT specialist jigsaw puzzle". The task is to group 30 suggested elements according to 10 given IT-related occupations: Mobile developer (Android), Mobile developer (iOS), Frontend developer, Backend developer, Project manager, Java developer, .NET developer, UX/UI designer, QA tester, and Database developer. The number of pieces for each occupation varies from 3 to 6, and similar pieces can belong to different occupations. The order in this challenge does not matter, and the number of attempts is not limited; the evaluation mode, however, has a time limit. Pieces that do not match automatically drop away, denoting a mistake. The challenge is complete after all the pieces are put together (figure 12). During the challenge, the user can get a tip message by clicking the occupation name; it shows up for 10 seconds. The number of tips is limited, and the game counts how many the user took.
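The matching rule can be sketched as follows. This is a hypothetical Python sketch: the piece names and the mapping are invented for illustration; it only captures the point that a piece may belong to several occupations and that a non-matching piece drops away:

```python
# Hypothetical piece-to-occupation mapping for the jigsaw puzzle task.
PIECES = {
    "Kotlin": {"Mobile developer (Android)"},
    "Swift": {"Mobile developer (iOS)"},
    "SQL": {"Database developer", "Backend developer"},  # shared piece
}

def piece_fits(piece, occupation):
    """A piece snaps into place only for a matching occupation;
    otherwise it drops away, denoting a mistake."""
    return occupation in PIECES.get(piece, set())
```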
All entities were already described for other tasks (table 2); the additional entity here is phys_keep_upright, used to hold physical objects in a defined position while allowing the angle to be set. This entity keeps the puzzle pieces in a certain position.
The tasks are meant not only to evaluate the users' digital competence but also to introduce faculty life and the educational system, since the game models reflect real objects.
At the moment, the quest has 5 challenges, though the task pool can be changed. To succeed, the user has to complete all of the challenges, but the order can be arbitrary. To choose a challenge, the user simply clicks on it, and a voice-over explains the task and the point to go to. When the user reaches the right classroom, the voice-over provides detailed instructions for the challenge. Once a challenge is in progress, a quit option becomes available. To abort the challenge, the user presses the corresponding button in the classroom or uses a keyboard shortcut.
The completion status of each task is always displayed; its color changes from red to green. During the process, the user sees the score for the challenges completed. Each task has its own evaluation criteria, but every task is worth 4 points, so the maximum score is 20 points; within a task, points are awarded according to its complexity. The system classifies applicants who scored less than 10 points as having a low level of digital competence, from 10 to 15 points as the middle level, and above 15 points as a high level. Individual challenges have no time limits, but the overall quest time is limited. Thus, we could evaluate the users' ability to plan their time and decide on the order and timing of the challenges they take.
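A minimal sketch of the scoring scheme described above (five tasks, 4 points each, 20 points maximum, with the stated level thresholds):

```cpp
#include <string>

// Maps a total quest score (0..20) to the competence level described
// in the article: <10 low, 10..15 middle, >15 high.
std::string competenceLevel(int totalScore) {
    if (totalScore < 10) return "low";
    if (totalScore <= 15) return "middle";
    return "high";
}
```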

The experiment results
To evaluate the quest's efficiency, we held an experiment at the IT championship for applicants at the computer science and information technologies department of the National Aerospace University "Kharkiv Aviation Institute"; the results are presented in [18]. This paper compares the results of students who passed the quest on a computer and in real life. The analysis showed that the difference between the results of the two groups of students who participated in the IT championship is not significant, which confirms the results of previous research [10,16]. The teenagers, however, were mostly attracted to the 3D game.
In 2020 the championship was held at the university for the fourth time, and applicants were offered the 3D game challenge. Due to the pandemic, all students participated in the championship online, while we calculated their scores and determined the winners (https://www.youtube.com/watch?v=3HRz2GoudeA).
In total, we registered 180 students from 35 schools, though only 116 students participated in the game and completed all tasks.
There were 84 boys (72%) and 32 girls (28%) among the participants. To process the overall applicants' results we applied statistical analysis using R packages [6,11]. We calculated the average scores for girls and boys; figure 13 shows the distribution of the scores. Boys demonstrated better results (an average score of 10.7) than girls (an average score of 9.8), but the difference is not statistically significant (Student's t-statistic equals 1.18 at p = 0.23). To verify that the tasks are valid and applicable for assessing the applicants' skills in the field of computer science, we carried out a psychometric analysis. We computed the item difficulty, which reflects the participants' performance on particular tasks, and the item-total correlation, which characterizes the consistency of the test tasks. The obtained results are given in table 3. They show that items 1 and 5, which asked participants to answer questions related to the IT industry and to assemble a puzzle, were the most difficult in the 3D game (47% and 46% of students, respectively, got the maximum scores). This confirms that tests should be designed carefully, taking into account the target audience's cognitive skills and the test's intended use.
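The two psychometric indices mentioned above can be computed as follows; we assume the standard definitions of item difficulty (mean proportion of the maximum item score achieved) and item-total correlation (Pearson correlation between item scores and total scores), since the article does not spell out its exact formulas:

```cpp
#include <cmath>
#include <vector>

// Item difficulty: the mean fraction of the maximum item score achieved
// across all participants (1.0 = everyone got the maximum, i.e. an easy item).
double itemDifficulty(const std::vector<double>& itemScores, double maxScore) {
    double sum = 0.0;
    for (double s : itemScores) sum += s / maxScore;
    return sum / itemScores.size();
}

// Pearson correlation, used here as the item-total correlation between
// per-item scores (x) and total quest scores (y).
double pearson(const std::vector<double>& x, const std::vector<double>& y) {
    const size_t n = x.size();
    double mx = 0, my = 0;
    for (size_t i = 0; i < n; ++i) { mx += x[i]; my += y[i]; }
    mx /= n; my /= n;
    double sxy = 0, sxx = 0, syy = 0;
    for (size_t i = 0; i < n; ++i) {
        sxy += (x[i] - mx) * (y[i] - my);
        sxx += (x[i] - mx) * (x[i] - mx);
        syy += (y[i] - my) * (y[i] - my);
    }
    return sxy / std::sqrt(sxx * syy);
}
```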
Items 2-4 appeared to be easier for the students (63%, 52%, and 52% of students, respectively, got the maximum scores), since they aimed to assess not knowledge but the ability to navigate the game and concentrate. Thus, in accomplishing these items, students had to demonstrate metacognitive skills.
However, in terms of the correlation of item scores with the total score, the first item correlates the most (the correlation coefficient equals 0.6). This means that students who answered the IT-related questions better also showed higher overall results. The average difficulty across all test tasks is 0.2, which corresponds to a middle level of difficulty. The results confirmed that the tasks are reliable and adequate for determining the level of digital competencies of future applicants to IT departments.
The analysis showed that this format of test tasks makes it possible not only to determine the level of participants' competencies but also to identify the skills that should be developed. Tests in the form of a game that automatically collects data allow a digital competencies profile to be built for each participant. This profile can be analyzed and compared to a sample "desired" profile to decide on a career guidance and training strategy. The profile analysis identifies the competencies that do not require development, the competencies that require development, and the missing competencies. To provide an integrated assessment of the participants' competencies, we used the weighting and ranking method. As a result, we received a clear picture of the skills a participant has already obtained and the skills that should be developed so that the participant could enter an IT-related department and successfully study there.
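The weighting-and-ranking idea can be sketched as comparing a participant's profile to the "desired" profile and ranking the weighted deficits; the weights, scales, and names below are hypothetical, since the article does not publish its exact scheme:

```cpp
#include <algorithm>
#include <vector>

// A ranked gap between a participant's competence and the desired level.
struct Gap { int competence; double weightedDeficit; };

// Compares an actual competence profile to a desired one, weights each
// shortfall by the competence's assumed importance, and ranks the gaps
// so the most critical missing competencies come first.
std::vector<Gap> rankGaps(const std::vector<double>& actual,
                          const std::vector<double>& desired,
                          const std::vector<double>& weights) {
    std::vector<Gap> gaps;
    for (size_t i = 0; i < actual.size(); ++i) {
        double deficit = desired[i] - actual[i];
        if (deficit > 0)  // competencies at or above the target need no development
            gaps.push_back({static_cast<int>(i), deficit * weights[i]});
    }
    std::sort(gaps.begin(), gaps.end(),
              [](const Gap& a, const Gap& b) { return a.weightedDeficit > b.weightedDeficit; });
    return gaps;
}
```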
Notably, the middle level of the participants' digital competence did not decrease their interest in our evaluation approach. This fact is important not only for future IT specialists but also for the departments' career guidance process. The students mostly coped with the tasks and gave positive feedback on participating in the game.

Conclusion
Based on the aim of this research, the specific tasks we addressed while developing the 3D quest game, and the results of assessing the application's efficiency in career guidance, we draw the following conclusions.
The game application development technology we suggest can be utilized by developers of 3D models and games, in particular for training future IT specialists.
We utilized various technologies to implement the application idea. Leveraging Unity 3D and Source Engine as the main engines allowed us to create a 3D model of the game and its main objects. We edited objects in Hammer Editor and created a realistic model of the department's classrooms with the Agisoft PhotoScan Pro tool and photogrammetry. Pathfinding was implemented via navigation grids, which allow movement through geometrically complicated 3D scenes.
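As a rough illustration of pathfinding on a navigation grid, breadth-first search over walkable cells finds a shortest path; real engines use navmeshes and more sophisticated algorithms such as A*, so this is only a minimal model:

```cpp
#include <array>
#include <queue>
#include <utility>
#include <vector>

// BFS over a grid of walkable (1) and blocked (0) cells; returns the
// number of steps on a shortest path from start to goal, or -1 if the
// goal is unreachable. A toy stand-in for navmesh pathfinding.
int shortestPathLength(const std::vector<std::vector<int>>& walkable,
                       std::pair<int, int> start, std::pair<int, int> goal) {
    const int rows = walkable.size(), cols = walkable[0].size();
    std::vector<std::vector<int>> dist(rows, std::vector<int>(cols, -1));
    std::queue<std::pair<int, int>> q;
    dist[start.first][start.second] = 0;
    q.push(start);
    const std::array<std::pair<int, int>, 4> dirs{{{1, 0}, {-1, 0}, {0, 1}, {0, -1}}};
    while (!q.empty()) {
        auto [r, c] = q.front();
        q.pop();
        if (r == goal.first && c == goal.second) return dist[r][c];
        for (auto [dr, dc] : dirs) {
            int nr = r + dr, nc = c + dc;
            if (nr >= 0 && nr < rows && nc >= 0 && nc < cols &&
                walkable[nr][nc] && dist[nr][nc] == -1) {
                dist[nr][nc] = dist[r][c] + 1;
                q.push({nr, nc});
            }
        }
    }
    return -1;  // no walkable route exists
}
```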
The game scenario provides a virtual tour around the 3D model of the university department. Since the game replicates real-life objects, applicants can examine the department's equipment and classrooms.
During the quest's development, we took into account requirements for the participants' characteristics and the game environment, and used various types of tests with hints and voice-over. This contributed to accurate evaluation and increased the students' motivation to pursue an IT-related profession, in particular through model building and research.
The quest includes several different challenges meant to evaluate the applicants' digital competence in relation to the DigComp 2.2 framework components: information and data literacy, communication and collaboration, digital content creation, safety, and problem-solving. The tasks also reveal the applicants' ability to work efficiently and to use computers in real life.
The experiment results demonstrate the 3D quest's effectiveness. According to the results of the 3D quest, applicants demonstrated an average level of digital competence, with the difficulty of certain test items at 0.5. This indicates that applicants made a conscious choice of faculty and are ready for further study. Our psychometric analysis confirmed the reliability and consistency of the test tasks we developed.
The applicants rated the 3D quest as a more modern and attractive form of engagement. They also claimed this up-to-date approach would influence their choice of a university. The overall results of the test tasks outlined areas for improvement and showed which digital competencies the students still have to acquire.
Thus, our 3D quest application can expand the audience for career guidance activities and improve the public image of the university. Moreover, applicants can use the 3D quest to decide on their future occupation.
In addition to promotional campaigns and career guidance, this application can help to teach and test students. To this end, several psychometric indicators of the 3D quest tasks were analyzed to allow further improvement of item quality.
Prospective research directions have become pressing due to the shift to digital learning. These are: creating a convenient and effective environment for digital learning using VR and AR technologies, using the application to evaluate the digital competence of future IT specialists, and adjusting the educational plan for the university's first-year students.