Database Open Access

Electroencephalogram and eye-gaze datasets for robot-assisted surgery performance evaluation

Somayeh B. Shafiei, Saeed Shadpour, James Mohler, Mehdi Seilanian Toussi, Philippa Doherty, Zhe Jing

Published: July 14, 2023. Version: 1.0.0


When using this resource, please cite:
Shafiei, S. B., Shadpour, S., Mohler, J., Seilanian Toussi, M., Doherty, P., & Jing, Z. (2023). Electroencephalogram and eye-gaze datasets for robot-assisted surgery performance evaluation (version 1.0.0). PhysioNet. https://doi.org/10.13026/qj5m-n649.

Additionally, please cite the original publication:

Shadpour, S., Shafqat, A., Toy, S., Jing, Z., Attwood, K., Moussavi, Z., & Shafiei, S. B. (2023). Developing cognitive workload and performance evaluation models using functional brain network analysis. npj Aging, Nature Publishing Group.

Please include the standard citation for PhysioNet:
Goldberger, A., Amaral, L., Glass, L., Hausdorff, J., Ivanov, P. C., Mark, R., ... & Stanley, H. E. (2000). PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation [Online]. 101 (23), pp. e215–e220.

Abstract

Surgical training and patient safety depend on objective and accurate performance evaluation. Robot-assisted surgery (RAS) is newer and its training less well defined, and resources that support data collection from participants with various RAS skill levels are not easily accessible. This work describes and makes available for public use electroencephalogram (EEG) and eye-gaze data collected from 25 participants with varying levels of RAS experience while they performed RAS simulator tasks. Twenty-seven tasks from six modules of the da Vinci® Skills Simulator, which are standard elements of most RAS surgical skills training curricula, were included. The dataset contains performance scores (scale: 0 to 100) generated by the simulator after each exercise was completed. Participants completed each task at least twice; if they did not achieve a score of at least 70 (out of 100) on one of those attempts, they repeated the task until they did. The National Institute of Biomedical Imaging and Bioengineering (NIBIB) and Roswell Park Comprehensive Cancer Center (RPCCC) supported the collection of data for use in RAS training, which is why the dataset has been named the NIBIB-RPCCC-RAS performance evaluation dataset. The dataset contains 1636 EEG recordings (in "*.edf" format), 1559 eye-gaze recordings (in "*.csv" format), performance scores, and participant demographic data, such as age, gender, and dominant hand. To our knowledge, this is the first shared dataset that contains EEG and eye-gaze data from participants with a variety of RAS experience levels who performed all tasks included in the da Vinci simulator modules.


Background

The performance of surgeons often improves over time as a result of experience or learning new skills. Studies examining surgical performance enhancement are becoming more important since they can affect surgical metrics, clinical outcomes, and cost-benefit assessments. Researchers have been interested in examining this area due to the rising popularity of RAS, particularly in urology and gynecology [2, 3]. Smaller incisions, less discomfort, less blood loss, and quicker recovery are advantages of RAS technology [4, 5]. RAS provides increased dexterity, more ergonomic body positions, and better visualization, which have enhanced surgeons' performance [3].

However, learning RAS is more difficult and expensive than traditional minimally invasive techniques, and methods for establishing and monitoring performance measures appear inconsistent [1]. Additionally, the challenging eye-hand, bimanual, and foot coordination necessary for RAS may put surgeons under a mental workload that causes exhaustion [6].

Most current performance evaluation techniques are subjective. These methods rely heavily on raters' opinions, which can vary since each rater may weight the metrics differently. Retrieval of information from hand kinematics and surgical videos [7, 8], eye-gaze data [9], EEG signals [10, 11], and functional near-infrared signals [12] has been suggested as a means to evaluate RAS performance objectively.

However, the only available dataset for RAS surgical performance assessment consists of kinematics and videos of eight surgeons with different levels of skill who performed three elementary surgical tasks on a bench-top model [8]. RAS training could be improved if data were publicly available from enough participants, with varying levels of RAS expertise, performing standard surgical operations using surgical robots or simulators.

The objective of this paper was to describe and release the dataset collected for a study of RAS surgical skill acquisition in which participants with RAS experience ranging from 0 to more than 1000 hours performed all surgical tasks from six modules of the da Vinci® Skills Simulator (developed in collaboration with Mimic® Technologies, Inc., Seattle, WA, USA). This dataset was named the “NIBIB_RPCCC_RAS” to highlight that it was collected through the support of the National Institute of Biomedical Imaging and Bioengineering (NIBIB) and Roswell Park Comprehensive Cancer Center (RPCCC).

The goal of sharing this dataset was to advance RAS performance evaluation research. This dataset may be used to 1) determine the relationship among visual metrics, EEG features (features extracted from EEG data), and surgical performance; 2) create models to predict RAS surgical performance using visual metrics and EEG features; 3) determine whether task difficulty, age, and gender are significant factors influencing surgical performance, visual metrics, and EEG features; 4) create models to predict learning rate; and 5) identify significant changes in brain function during performance improvement.


Methods

Simulator tasks: A group of 25 individuals, aged 20 to 67 years and with varying levels of RAS experience, participated in the experiments. Twelve participants had no experience with RAS; 4 participants had less than 100 hours of experience; 4 participants had about 500 hours of experience; and 5 participants had more than 1000 hours of experience with RAS. Participants used the da Vinci surgical simulator (Figure 1) to perform tasks from six modules (Table 1). Each participant performed 27 surgical tasks using a da Vinci simulator while wearing a 128-channel EEG headset (AntNeuro®) and Tobii eyeglasses (Tobii®). Participants were asked to watch an instructional video for each task before performing it. Participants performed each task at least twice; if they were not able to achieve a passing score of 70 (out of 100) on at least one of the attempts, they repeated the task until a passing score was reached.

Instructions for each task

Pick and place: Red, yellow, and blue objects appear scattered on a surface with matching-colored containers. Place the objects in their respective bins; each object will turn green when placed in the correct container. Utilize a full range of motion by rotating your wrists when picking and placing objects. Avoid instrument collisions with the bins and the floor, and avoid crossing the instruments and colliding them with each other.

Pegboard 1: Several rings appear hanging on a single row of pegs on a wall-like structure, and there are two pegs on the floor. Pick up the yellow highlighted ring with the (left) highlighted tool. Transfer the ring to the other (right) tool when it flashes yellow. Place the ring on the flashing floor peg. Be sure to pick up the ring with the flashing tool and place it on the flashing floor peg. Use the camera and clutch functions as necessary to adjust the field of view and the range/reach of the instruments.

Pegboard 2: Several rings appear, distributed randomly over a larger plane. Pick up the yellow highlighted ring with the highlight tool. Transfer the ring to the other tool when it flashes yellow. Place the ring on the flashing floor peg. Be sure to pick up the ring with the flashing tool and place it on the flashing floor peg. Use the camera and clutch functions as necessary to adjust the field of view and range/reach of the instruments. This task’s complexity is increased from level 1 due to the user being required to use the camera and clutch functions efficiently to complete the task.

Match board 1: Objects in the shapes of numbers and letters appear around a match board with corresponding character bins. Pick up each object and place it in its corresponding location.

Match board 2: Three panel doors obstruct the match board below. Retract the panel doors with one instrument and place the characters in their respective bins with the other instrument. This task’s complexity is increased from level 1 due to the user being required to lift swinging doors to reveal the match board below. When lifting the doors, the user must consider the natural swing of the door to avoid applying excessive force to the instrument.

Match board 3: Objects in the shapes of numbers and letters appear around a match board with corresponding character bins. Three swinging panel doors and three sliding doors obstruct the match board below. Switch between three instrument arms to manipulate multiple doors: use the third instrument to retract one of the sliding doors, then use the other instrument arms to open the swinging door, which reveals the character bin below. Place the respective character in the bin. This task’s complexity is increased from level 2 due to the user being required to open several sets of doors (swinging and sliding) and to coordinate three instruments instead of only two. The user must also learn to avoid using excessive force and to keep the tools in the field of view to complete the exercise.

Ring and rail 1: A single ring and a solid curving rail appear. Pick up the ring and guide the ring along the rail. Take care to rotate the wrist accordingly as the rail curves to avoid applying excessive force to the instruments; avoid pulling or dragging the ring along the rail and avoid colliding the instruments with the rail.

Ring and rail 2: Three color-coded rails are intricately intertwined with three matching rings. Guide the colored rings along their respective rails. Take care to rotate your wrists accordingly as the rail curves to avoid colliding with the rails and applying excessive force to the instruments. Also, avoid pulling or dragging the ring along the rail. This task’s complexity is increased from level 1 due to the user being required to maneuver three rings on three intertwined rails over a larger workspace. This requires users to use the camera and clutch function more efficiently to complete the task.

Camera targeting 1: The screen appears with a dark blue crosshair in the center of the screen. Light blue spheres appear in random positions in the 3D workspace. A small arrow appears around the crosshair to guide the user to the sphere’s location. Practice zooming, rotating, and adjusting the camera’s position to change the field of view so the crosshair lines up with the sphere. Follow the small blue arrow to find the next sphere.

Camera targeting 2: The screen appears with a dark blue crosshair in the center of the screen. Camera targets (light blue spheres) appear in random positions in the 3D workspace. A small arrow appears around the crosshair to guide the user to the sphere’s location. Follow the arrow to find the target. Practice zooming, rotating, and adjusting the camera’s position to change the field of view so the crosshair lines up with the sphere. Once the target is deactivated, a stone resting on a floating platform will flash yellow. Pick up the stone with the flashing instrument. Follow the small blue arrow to find the next sphere, where you will need to deactivate another target and drop the stone into a flashing yellow basket. Follow the arrow to the next camera target. This level is more difficult than level 1 because it requires the user to manage smooth transitions between camera control tasks and EndoWrist manipulation tasks.

Scaling: Learn to use the touchpad to adjust the scaling ratio among quick (1.5:1), normal (2:1), and fine (3:1) settings. Once the appropriate scaling setting has been selected, one of the targets will flash yellow. Use the cautery hook to touch the center of the flashing target. The target will stop flashing once the center has been touched, and a new target will begin flashing. Use the camera and clutch functions as necessary to maneuver from target to target. Once all targets have been deactivated, follow the prompt to change the scaling setting. Under the fine setting, use the clutch and camera more actively to keep the tools centered in the field of view. Under the quick setting, move the instruments slowly and deliberately to touch the targets accurately.

Ring walk 1: Learn to effectively switch between camera control and EndoWrist manipulation tasks. Guide the flashing yellow ring along the vessel to the highlighted section of the vessel. A camera target (blue sphere) will appear. Adjust the camera to line up the crosshairs with the target to deactivate it. Alternate between moving the ring and adjusting the camera as prompted to do so.

Ring walk 2: Learn to effectively switch between camera control and EndoWrist manipulation tasks. Guide the flashing yellow ring along the vessel to the highlighted section of the vessel. A camera target (blue sphere) will appear. Adjust the camera to line up the crosshairs with the target to deactivate it. Alternate between moving the ring and adjusting the camera as prompted to do so. This level is more complex than level 1 in that the vessel is curvy and much longer. There are also obstacles along the vessel in the form of ring-like vessels attached to the cavity wall. The user must navigate through these obstacles by passing the ring from the left hand through the obstacle and to the right hand. Avoid moving the camera too far in one camera sweep to avoid losing sight of the instruments and to always maintain a view of them.

Ring walk 3: Learn to effectively switch between camera control and EndoWrist manipulation tasks. Guide the flashing yellow ring along the vessel to the highlighted section of the vessel. There are no camera targets at this level; instead, the user must practice retracting objects using the third instrument arm. This level is more complex than level 2 in that there are additional obstacles in the form of tissue flaps that obstruct the ring-like obstacles from level 2. Users must effectively use the third arm to retract the tissue flap and maneuver the ring along the vessel. Take care to keep all moving instruments in the field of view and to avoid using excessive force to retract the tissue flaps. Practice moving all three instrument arms together to keep tools centered in the field of view. Additionally, avoid crossing instruments and colliding them with each other.

Needle targeting: Select a colored needle from the suspended rack. Pierce the colored targets with the matching-colored needle. Drive the needle tip into the center of the large target and the smaller target behind it. Orient the needle in the best position so, with a smooth wrist rotation, the needle tip trajectory goes through the center of both targets. Once the needle has correctly hit both targets, the targets will turn green, and the needle will lock in place.

Thread the rings: Drive a needle and suture through several suspended rings (eyelets). The eyelets will sequentially flash yellow. Pass the needle through the flashing eyelet. Once successfully driven, the next eyelet will flash yellow. Hand off the needle between instruments to orient the needle at the best possible angle for driving the needle. Make sure to fully rotate your wrist to drive the needle smoothly and effectively through the eyelet opening.

Suture sponge 1: Drive the needle with the flashing yellow instrument through the yellow target. Once the first target turns green, rotate your wrist and drive the needle tip through the second yellow target until it turns green. Extract the needle, and the next target will turn yellow. There are only two possible targets at this level: one is the needle-tip entry and the other the needle-tip exit. The target and driving instrument will flash yellow, and these will change throughout the exercise.

Suture sponge 2: Drive the needle with the flashing yellow instrument through the yellow target. Once the first target turns green, rotate your wrist and drive the needle tip through the second yellow target until it turns green. Extract the needle and the next yellow target will appear. The locations of the targets will vary.

Suture sponge 3: Drive the needle with the flashing yellow instrument through the yellow target. Once the first target turns green, rotate your wrist and drive the needle tip through the second yellow target until it turns green. Extract the needle and the next yellow target will appear. The location and bite-size of the targets will vary.

Dots and needles 1: Drive the needle with the flashing yellow instrument through the yellow target. Once the first target turns green, rotate your wrist and drive the needle tip through the second yellow target until it turns green. Extract the needle and the next yellow target will appear. This exercise focuses on forehand throws.

Dots and needles 2: Drive the needle with the flashing yellow instrument through the yellow target. Once the first target turns green, rotate your wrist and drive the needle tip through the second yellow target until it turns green. Extract the needle and the next yellow target will appear. This exercise focuses on backhand throws.

Tubes: Drive the needle through the yellow side of the target and out the black side of the target. Drive the needle tip through the yellow target until it turns green. Extract the needle and the next yellow target will appear.

Energy switching 1: Follow the small blue arrow to find a target. Follow the instructions that appear on the screen. This will tell you which tool and which pedal to use. The target will disappear once you have applied enough energy for the ring around it to turn completely green.

Energy switching 2: For each target that appears, follow the instructions that appear on the screen. This will tell you which tool and which pedal to use. The target will disappear once you have applied enough energy for the ring around the target to turn completely green.

Energy dissection 1: Use either tool to cauterize the small branching blood vessels in two places, then cut these vessels, to isolate the large blood vessel.

Energy dissection 2: Use either tool to cauterize the small branching blood vessels in two places, then cut these vessels to isolate the large blood vessel. Watch out for bleeding vessels, which may appear randomly.

Energy dissection 3: Use either tool to cauterize the small branching blood vessels in two places, then cut these vessels to isolate the large blood vessel. Watch out for bleeding vessels, which may appear randomly. Be aware that the tools have changed hands.

Performance Scores: The simulator generated one score from 0 to 100 based on the user’s performance after each exercise was completed, where 0 points represented no valid performance to accomplish the simulated task, and 100 points represented performance that met all required standards. The da Vinci simulator program calculates performance scores using the following metrics:

  • Time to complete exercise – total time the user spent on the exercise (unit: seconds)
  • Economy of motion – total distance traveled by all instruments (unit: centimeters)
  • Instrument collisions – total number of instrument-on-instrument collisions that exceeded a minimum force threshold (total number)
  • Excessive instrument force – total time an excessive instrument force was applied above a prescribed threshold force. Forces on an instrument could arise from collisions with each other and from actions such as tissue retraction, driving a needle, or pulling on a suture. (unit: seconds)
  • Instruments out of view – total distance traveled by instruments outside of the user’s field of view (unit: centimeters)
  • Master workspace range – radius of user’s working volume on master grips (unit: centimeters)
  • Drops – number of times any object was dropped in an inappropriate region of the scene (total number)
  • Missed targets – number of missed targets (total number)
  • Misapplied energy time – total time incorrect energy was applied to a target or energy was enabled while not touching a target (unit: seconds)
  • Blood loss – the total volume of lost blood (no unit listed)
  • Broken vessels – number of broken vessels (total number)

Recording systems: Twenty eye-gaze metrics were captured at a constant rate of 50 Hz using the Tobii eyeglasses. Brain activity was recorded at a constant rate of 500 Hz using a 128-channel EEG headset. Four EEG leads, designed for electrooculogram recording, were not used.

The EEG and eye-gaze data were recorded simultaneously from each participant performing each exercise. The eego software from AntNeuro® was used to record EEG data (format: “*.edf”) using a 128-channel EEG headset at a frequency of 500 Hz, with Cz as the reference channel. Tobii Pro Controller software from Tobii® was used to record the eye-gaze metrics at a frequency of 50 Hz.


Data Description

This dataset was recorded from July 2020 to November 2021 from 25 participants with RAS experience ranging from 0 to more than 1000 hours. They performed 27 surgical tasks using a da Vinci simulator, and EEG and eye-gaze data were recorded simultaneously.

Each EEG file includes the EEG data recorded using a 128-channel EEG headset. The quality of the signals recorded at the F8, POz, AF4, AF8, F6, and FC3 channels was poor for some recordings. The “hdr” structure in each EEG file contains the label of each EEG channel, the number of channels, and the duration of the recording (seconds). The "patientID" and "recordID" variables in the "hdr" structure are de-identified values; the date they contain is the date on which the EEG file was converted to EDF format, not the recording date. The raw EEG data allow users the freedom to select their preferred methods for data interrogation.
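
As an illustration, these EDF files can also be inspected outside MATLAB. The sketch below uses the MNE-Python library (an assumed tool choice; any EDF reader will work) to load one recording and print the header information described above. The file name is a hypothetical example of the naming convention described later in this section.

    # Minimal sketch: load one EEG recording with MNE-Python (assumed tooling).
    # "EEG/1_1_1.edf" is a hypothetical example of the participantID_taskID_try
    # naming convention.
    import mne

    raw = mne.io.read_raw_edf("EEG/1_1_1.edf", preload=True, verbose=False)
    print(raw.ch_names)                     # label of each EEG channel
    print(len(raw.ch_names))                # number of channels
    print(raw.n_times / raw.info["sfreq"])  # duration of the recording (seconds)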

The eye-gaze data were preprocessed using TobiiPro2®, which applied a moving-average filter (window size of three points) for noise reduction. Data points with an angular velocity below 30 degrees per second were classified as fixations and those above 30 degrees per second as saccades; when the device could not detect the eye, TobiiPro2 classified the eye movement type at that data point as 'eye not found' or 'unknown'.
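
For readers who want to reproduce or adjust this classification on the raw gaze directions, the sketch below implements a generic velocity-threshold (I-VT) filter. It only illustrates the 30-degrees-per-second rule and is not Tobii's exact implementation.

    # Generic velocity-threshold (I-VT) sketch; illustrative, not Tobii's
    # implementation. `gaze_dir` is an (N, 3) array of unit gaze-direction
    # vectors sampled at 50 Hz; the result has N-1 labels.
    import numpy as np

    def classify_ivt(gaze_dir, fs=50.0, threshold_deg_s=30.0):
        # Angle between consecutive gaze vectors, converted to degrees/second.
        dots = np.clip(np.sum(gaze_dir[1:] * gaze_dir[:-1], axis=1), -1.0, 1.0)
        velocity = np.degrees(np.arccos(dots)) * fs
        # 1 = fixation (below threshold), 2 = saccade (at or above threshold)
        return np.where(velocity < threshold_deg_s, 1, 2)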

The eye-gaze metrics include horizontal and vertical gaze points (X, Y; unit: pixels), gaze position in three dimensions (X, Y, Z; unit: millimeters), gaze direction vectors for both eyes, pupil positions for both eyes in three dimensions (unit: millimeters), pupil diameters of both eyes (unit: millimeters), and an eye movement type index. Each EEG file includes the EEG signals recorded by each lead, the sampling rate, and the name of each channel. Each eye-gaze file includes 20 columns, representing the 20 measurements made by the eyeglasses. These measurements, from column 1 to column 20, are:

  • 1) Gaze point X; 2) Gaze point Y
  • 3) Gaze point 3D X; 4) Gaze point 3D Y; 5) Gaze point 3D Z
  • 6) Gaze direction left X; 7) Gaze direction left Y; 8) Gaze direction left Z
  • 9) Gaze direction right X; 10) Gaze direction right Y; 11) Gaze direction right Z
  • 12) Pupil position left X; 13) Pupil position left Y; 14) Pupil position left Z
  • 15) Pupil position right X; 16) Pupil position right Y; 17) Pupil position right Z
  • 18) Pupil diameter left; 19) Pupil diameter right
  • 20) Eye movement type index (1: Fixation; 2: Saccade; 0: Unknown)
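
A loading example for one eye-gaze file is sketched below with pandas (an assumed tool choice). The column names are shorthand for the 20 measurements listed above, and the sketch assumes the CSV files have no header row; verify this against an actual file before relying on it.

    # Sketch: load one eye-gaze file and label its 20 columns as documented
    # above. Assumes no header row (verify against a real file); the file name
    # is a hypothetical example of the naming convention.
    import pandas as pd

    COLUMNS = [
        "gaze_point_x", "gaze_point_y",
        "gaze_point_3d_x", "gaze_point_3d_y", "gaze_point_3d_z",
        "gaze_dir_left_x", "gaze_dir_left_y", "gaze_dir_left_z",
        "gaze_dir_right_x", "gaze_dir_right_y", "gaze_dir_right_z",
        "pupil_pos_left_x", "pupil_pos_left_y", "pupil_pos_left_z",
        "pupil_pos_right_x", "pupil_pos_right_y", "pupil_pos_right_z",
        "pupil_diam_left", "pupil_diam_right", "eye_movement_type",
    ]
    gaze = pd.read_csv("EYE/1_1_1.csv", header=None, names=COLUMNS)
    print(gaze.head())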

The EYE folder includes 1559 files in "*.csv" format, and the EEG folder includes 1636 files in "*.edf" format. The names of the EEG files were de-identified as 'participantID_taskID_try.edf' and the eye-gaze files as 'participantID_taskID_try.csv', where participantID is a number from 1 to 25 and taskID is a number from 1 to 27. The names of the tasks with ID numbers from 1 to 27 are listed in Table 1.
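
Because the participant, task, and attempt are encoded directly in the file names, they can be recovered by simple string splitting, as in this sketch:

    # Sketch: recover participant ID, task ID, and attempt number from a file
    # name that follows the 'participantID_taskID_try' convention.
    from pathlib import Path

    def parse_recording_name(path):
        participant, task, attempt = Path(path).stem.split("_")
        return int(participant), int(task), int(attempt)

    # e.g., parse_recording_name("EEG/3_12_2.edf") returns (3, 12, 2)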

The performance score for each try of tasks 1 to 27 was generated by the simulator (scale: 0 to 100, where 0 points represented no valid performance to accomplish the simulated task and 100 points represented performance that met all required standards). The “PerformanceScores.csv” file includes a summary of all EEG and eye-gaze files, the age and dominant hand of the subjects, and the associated performance scores (out of 100).

This dataset was used to develop models for performance and learning rate evaluation of RAS users performing simulator tasks [13], to identify functional brain network metrics associated with performance [14], and to train a deep neural network model to evaluate performance [15].


Usage Notes

Note: The 'EEGHEOGRCPz', 'EEGHEOGLCPz', 'EEGVEOGUCPz', and 'EEGVEOGLCPz' signals in the EEG data are electrooculogram-related and should not be used.

Limitations: The quality of the signals recorded at the F8, POz, AF4, AF8, F6, and FC3 channels was poor for some recordings. Some EEG or eye-gaze data could not be recorded due to problems with the recording systems. EEG recordings without synchronized eye-gaze data, and eye-gaze data without synchronized EEG, can be identified using the PerformanceScores.csv file.
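
Besides consulting PerformanceScores.csv, the unmatched recordings can also be found directly by comparing the file stems of the two folders, as in this sketch (assuming both folders use the numeric naming convention described above):

    # Sketch: list EEG recordings without a synchronized eye-gaze file, and
    # vice versa, by comparing file stems across the EEG and EYE folders.
    from pathlib import Path

    eeg_stems = {p.stem for p in Path("EEG").glob("*.edf")}
    eye_stems = {p.stem for p in Path("EYE").glob("*.csv")}

    print("EEG without eye-gaze:", sorted(eeg_stems - eye_stems))
    print("Eye-gaze without EEG:", sorted(eye_stems - eeg_stems))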

Data processing: GitHub code [16] can be used to load the EEG data into MATLAB. The EEGLAB toolbox [17] can be used to pre-process the EEG data. The eye-gaze data were pre-processed and are ready for analysis.
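
For users working outside MATLAB, a minimal pre-processing pass can be sketched with MNE-Python (an assumed tool choice). The band-pass range and average re-reference below are illustrative defaults, not the authors' published pipeline:

    # Minimal pre-processing sketch with MNE-Python. The 1-40 Hz band-pass and
    # average re-reference are illustrative choices, not the authors' pipeline.
    import mne

    EOG_CHANNELS = ["EEGHEOGRCPz", "EEGHEOGLCPz", "EEGVEOGUCPz", "EEGVEOGLCPz"]

    raw = mne.io.read_raw_edf("EEG/1_1_1.edf", preload=True, verbose=False)
    raw.drop_channels([ch for ch in EOG_CHANNELS if ch in raw.ch_names])
    raw.filter(l_freq=1.0, h_freq=40.0)  # band-pass filter (illustrative)
    raw.set_eeg_reference("average")     # re-reference (Cz was the online reference)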

Reuse potential of the dataset: Pre-processed EEG and eye-gaze data can be used as input to deep neural network algorithms. The MATLAB Signal Processing Toolbox can be used for time-frequency analysis of the EEG data and to extract power spectral density (PSD) and coherence features from it; those features can be used as input to machine learning algorithms. This dataset may be used to 1) determine the relationship among visual metrics, EEG features, and surgical performance; 2) develop models to predict RAS surgical performance using visual and EEG features; 3) determine whether task difficulty, age, and gender are significant factors affecting surgical performance and visual and EEG metrics; 4) create models to predict learning rate; and 5) identify significant changes in brain function during performance improvement.
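
As an example of the feature-extraction step described above, Welch PSD and magnitude-squared coherence can also be computed with SciPy (an assumed alternative to MATLAB's pwelch and mscohere); the window length below is an illustrative choice:

    # Sketch: PSD and coherence features for two EEG channels sampled at
    # 500 Hz. The random arrays stand in for real channel data.
    import numpy as np
    from scipy.signal import welch, coherence

    fs = 500.0                     # EEG sampling rate (Hz)
    x = np.random.randn(10 * 500)  # stand-in for one EEG channel (10 s)
    y = np.random.randn(10 * 500)  # stand-in for a second channel

    freqs, psd_x = welch(x, fs=fs, nperseg=1024)          # PSD feature
    freqs, coh_xy = coherence(x, y, fs=fs, nperseg=1024)  # coherence feature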

This dataset was released for academic research, not for commercial usage.


Release Notes

1.0.0: Initial release of the dataset.


Ethics

This study was conducted in accordance with relevant guidelines and regulations and was approved by the Roswell Park Comprehensive Cancer Center’s Institutional Review Board (IRB: I-241913). The IRB issued a waiver of documentation of consent and the participants were given a Research Study Information Sheet and provided verbal consent.


Acknowledgements

Research reported in this dataset was supported by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under award number R01EB029398, and the Alliance Foundation of Roswell Park Comprehensive Cancer Center. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. This work was supported by National Cancer Institute (NCI) grant P30CA016056 involving the use of Roswell Park Comprehensive Cancer Center’s Comparative Oncology Shared Resource and the Applied Technology Laboratory for Advanced Surgery (ATLAS). The authors thank members of the Comparative Oncology Shared Resource and the ATLAS at Roswell Park Comprehensive Cancer Center, including Dr. Sandra Sexton, Dr. Leslie Curtin, and Dr. Khurshid A Guru, for providing equipment and services throughout the recording sessions. The authors would like to thank all participants in the study.


Conflicts of Interest

The authors declare that they have no conflicts of interest.


References

  1. Khan N, Abboudi H, Khan MS, Dasgupta P, Ahmed K. Measuring the surgical ‘learning curve’: methods, variables and competency. BJU international. 2014;113(3):504-8.
  2. Randell R, Alvarado N, Honey S, Greenhalgh J, Gardner P, Gill A, et al., editors. Impact of robotic surgery on decision making: perspectives of surgical teams. AMIA Annual Symposium Proceedings; 2015: American Medical Informatics Association.
  3. Lanfranco AR, Castellanos AE, Desai JP, Meyers WC. Robotic surgery: a current perspective. Annals of surgery. 2004;239(1):14.
  4. Diana M, Marescaux J. Robotic surgery. Journal of British Surgery. 2015;102(2):e15-e28.
  5. Giulianotti PC, Coratti A, Sbrana F, Addeo P, Bianco FM, Buchs NC, et al. Robotic liver surgery: results for 70 resections. Surgery. 2011;149(1):29-39.
  6. Narazaki K, Oleynikov D, Stergiou N. Robotic surgery training and performance. Surgical Endoscopy And Other Interventional Techniques. 2006;20(1):96-103.
  7. Loukas C, Georgiou E. Multivariate autoregressive modeling of hand kinematics for laparoscopic skills assessment of surgical trainees. IEEE transactions on biomedical engineering. 2011;58(11):3289-97.
  8. Gao Y, Vedula SS, Reiley CE, Ahmidi N, Varadarajan B, Lin HC, et al., editors. Jhu-isi gesture and skill assessment working set (jigsaws): A surgical activity dataset for human motion modeling. MICCAI workshop: M2cai; 2014.
  9. Ahmidi N, Hager GD, Ishii L, Fichtinger G, Gallia GL, Ishii M, editors. Surgical task and skill classification from eye tracking and tool motion in minimally invasive surgery. International Conference on Medical Image Computing and Computer-Assisted Intervention; 2010: Springer.
  10. Shafiei SB, Jing Z, Attwood K, Iqbal U, Arman S, Hussein AA, et al. Association between Functional Brain Network Metrics and Surgeon Performance and Distraction in the Operating Room. Brain sciences. 2021;11(4):468.
  11. Shafiei SB, Hussein AA, Guru KA. Dynamic changes of brain functional states during surgical skill acquisition. PloS one. 2018;13(10):e0204836.
  12. Nemani A, Kruger U, Cooper CA, Schwaitzberg SD, Intes X, De S. Objective assessment of surgical skill transfer using non-invasive brain imaging. Surgical Endoscopy. 2019;33(8):2485-94.
  13. Development of performance and learning rate evaluation models in robot-assisted surgery using electroencephalogram and eye tracking.
  14. Shadpour S, Shafqat A, Toy S, Jing Z, Attwood K, Moussavi Z, Shafiei SB. Developing cognitive workload and performance evaluation models using functional brain network analysis. npj Aging. 2023.
  15. Classification of cognitive performance levels using deep neural network trained by electroencephalogram data.
  16. edfread function for MATLAB. https://github.com/wisorlab/Matlab/blob/master/generating/edfread.m [Accessed: 13 July 2023]
  17. Website for the EEGLAB MATLAB Toolbox. https://sccn.ucsd.edu/eeglab/ [Accessed: 13 July 2023]

Access

Access Policy:
Anyone can access the files, as long as they conform to the terms of the specified license.

License (for files):
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License

Discovery

DOI (version 1.0.0):
https://doi.org/10.13026/qj5m-n649

DOI (latest version):
https://doi.org/10.13026/9m3f-ac20


Files

Total uncompressed size: 33.2 GB.


  • EEG (folder)
  • EYE (folder)
  • Figure1.png (3.7 MB, modified 2022-11-28)
  • LICENSE.txt (0 B, modified 2023-01-24)
  • PerformanceScores.csv (75.1 KB, modified 2022-11-29)
  • RECORDS (24.4 KB, modified 2023-01-18)
  • SHA256SUMS.txt (250.9 KB, modified 2023-07-14)
  • Table1.csv (1.9 KB, modified 2022-11-29)