Hyperspectral (HS) images are commonly used for classification tasks in different domains, such as medicine. In this field, a recent application is the differentiation between healthy tissue and different types of cancerous tissue. To this end, several machine learning techniques have been proposed to generate classification maps that indicate the type of tissue corresponding to each pixel in the image. These 2D representations can be used stand-alone, but they cannot be properly registered with other valuable data sources, such as Magnetic Resonance Imaging (MRI), that could improve the accuracy of the system. For this reason, this paper lays the foundations of a multi-modal classification system that will incorporate 3D information into HS images. Specifically, we address the acceleration of one of the hotspots of depth estimation algorithms. The MPEG-I Depth Estimation Reference Software (DERS) produces high-quality depth maps by relying on a global energy optimization algorithm, Graph Cuts. However, this algorithm requires very long processing times, preventing its use during surgical operations. This work introduces GoRG (Graph cuts Reference depth estimation in GPU), a GPU-accelerated version of DERS able to produce depth maps from RGB and HS images. Given the current lack of HS multi-view datasets, results are reported on RGB images to validate the acceleration strategy. GoRG achieves an average speed-up of ×25 over the baseline DERS 8.0, reducing the total computation time from around one hour for 8 frames to only a few minutes. The parallelization incurs an average decrease of 1.6 dB in Weighted-to-Spherically-uniform Peak Signal-to-Noise Ratio (WS-PSNR), with differences approaching 4 dB in some cases. However, when the Structural Similarity Index (SSIM) is used as the quality metric, results come closer to those of the baseline DERS: an average decrease of only 1.20% is observed, showing that the obtained speed-up compensates for the subjective quality losses.