Published: 2026/1/5 15:41:03

Resolution? Who cares! OCT image analysis that doesn't sweat the resolution is here! 🎉 (powered by INRs)

1. Analyze OCT images with amazing precision!

2. Gyaru-style sparkle points ✨
● OCT (an eye scan) images can now be analyzed regardless of resolution (how fine-grained the image is)!
● You can see the retina (the tissue at the back of the eye) in detailed 3D, so diseases are easier to spot 👀
● The AI got smarter and can handle images taken under all kinds of conditions, so it works across lots of different hospitals!

3. Detailed explanation
● Background: OCT images used in eye exams vary with resolution and acquisition protocol, which made them hard for AI to analyze.
● Method: They used this thing called an "INR" to build a 3D model that isn't tied to (doesn't depend on) resolution! Amazing!
● Results: The structure of the retina can now be seen in way more detail, and diseases became easier to find!
● Significance: It can handle all kinds of OCT images, so it's useful at lots of hospitals, and diagnoses could get even more accurate!
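The "not tied to resolution" idea above can be sketched in a few lines: an INR is just a small network that maps a continuous 3D coordinate to an intensity, so the same representation can be queried on a grid of any density. Below is a minimal numpy sketch with random, untrained weights (the layer sizes and names are illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny coordinate MLP: (x, y, z) in [0, 1]^3 -> scalar intensity.
# Weights are random here; in practice they would be fit to an OCT volume.
W1 = rng.normal(size=(3, 64)); b1 = np.zeros(64)
W2 = rng.normal(size=(64, 1)); b2 = np.zeros(1)

def inr(coords):
    """coords: (N, 3) array of continuous positions -> (N,) intensities."""
    h = np.tanh(coords @ W1 + b1)
    return (h @ W2 + b2).ravel()

# Query the SAME representation at two different "resolutions":
coarse = np.stack(np.meshgrid(*[np.linspace(0, 1, 8)] * 3,
                              indexing="ij"), -1).reshape(-1, 3)
fine = np.stack(np.meshgrid(*[np.linspace(0, 1, 32)] * 3,
                            indexing="ij"), -1).reshape(-1, 3)

print(inr(coarse).shape)  # (512,)   — an 8×8×8 grid
print(inr(fine).shape)    # (32768,) — a 32×32×32 grid, no retraining needed
```

Because the input is a coordinate rather than a pixel grid, nothing in the network changes when the query resolution changes; that is the property the paper exploits for anisotropic OCT volumes.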

4. Ideas for real-world use 💡
● At ophthalmology clinics (eye doctors), doctors could diagnose diseases more easily and accurately!
● Patients could learn about the state of their own eyes in detail and feel more reassured!


Don't Mind the Gaps: Implicit Neural Representations for Resolution-Agnostic Retinal OCT Analysis

Bennet Kahrs / Julia Andresen / Fenja Falta / Monty Santarossa / Heinz Handels / Timo Kepp

Routine clinical imaging of the retina using optical coherence tomography (OCT) is performed with large slice spacing, resulting in highly anisotropic images and a sparsely scanned retina. Most learning-based methods circumvent the problems arising from the anisotropy by using 2D approaches rather than performing volumetric analyses. These approaches inherently bear the risk of generating inconsistent results for neighboring B-scans. For example, 2D retinal layer segmentations can have irregular surfaces in 3D. Furthermore, the typically used convolutional neural networks are bound to the resolution of the training data, which prevents their usage for images acquired with a different imaging protocol. Implicit neural representations (INRs) have recently emerged as a tool to store voxelized data as a continuous representation. Using coordinates as input, INRs are resolution-agnostic, which allows them to be applied to anisotropic data. In this paper, we propose two frameworks that make use of this characteristic of INRs for dense 3D analyses of retinal OCT volumes. 1) We perform inter-B-scan interpolation by incorporating additional information from en-face modalities that help retain relevant structures between B-scans. 2) We create a resolution-agnostic retinal atlas that enables general analysis without strict requirements for the data. Both methods leverage generalizable INRs, improving retinal shape representation through population-based training and allowing predictions for unseen cases. Our resolution-independent frameworks facilitate the analysis of OCT images with large B-scan distances, opening up possibilities for the volumetric evaluation of retinal structures and pathologies.
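The abstract's two key moves, generalizable (latent-conditioned) INRs and inter-B-scan interpolation, can be illustrated with the same coordinate-network idea: condition the network on a per-case latent code, then evaluate it at slice positions that were never acquired. A hedged numpy sketch with random weights follows; the latent size, layer widths, and function names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
LATENT_DIM = 8  # per-eye latent code (illustrative size)

W1 = rng.normal(size=(3 + LATENT_DIM, 32)); b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1))

def generalizable_inr(coords, z):
    """Coordinate network conditioned on a per-case latent code z."""
    z_rep = np.broadcast_to(z, (len(coords), LATENT_DIM))
    x = np.concatenate([coords, z_rep], axis=1)
    return (np.tanh(x @ W1 + b1) @ W2).ravel()

# Suppose B-scans were acquired only at sparse slice positions
# y ∈ {0.0, 0.25, 0.5, 0.75, 1.0}. Because the INR is continuous,
# we can still evaluate a full "virtual" B-scan at an unscanned
# position, e.g. y = 0.125:
z_patient = rng.normal(size=LATENT_DIM)
xz = np.stack(np.meshgrid(np.linspace(0, 1, 16), np.linspace(0, 1, 16),
                          indexing="ij"), -1).reshape(-1, 2)
coords = np.insert(xz, 1, 0.125, axis=1)  # (x, y=0.125, z) for every pixel
virtual_bscan = generalizable_inr(coords, z_patient).reshape(16, 16)
print(virtual_bscan.shape)  # (16, 16)
```

In this setup, a new (unseen) eye would be handled by optimizing only its latent code while keeping the shared network weights fixed, which is what makes population-trained INRs applicable beyond the training cases.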

cs / cs.CV