Volume Rendering
Volume rendering does not display a real, physical object; instead, an interpolation algorithm takes the full stack of slice images and renders a 3D-like object from them.
There are many interpolation algorithms. Related VTK classes:
vtkImage Reader/Writer
vtkPolyData
StructuredPoints
A subclass of ImageData.
StructuredPoints is a subclass of ImageData that requires the data extent to exactly match the update extent. Normally, image data allows the data extent to be larger than the update extent. StructuredPoints also defines the origin differently than vtkImageData: for structured points the origin is the location of the first point, whereas images define the origin as the location of point (0, 0, 0). The image origin is stored in an ivar, and structured points have special methods for setting/getting the origin and extents.
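The origin/spacing bookkeeping is easy to state outside VTK. A minimal sketch (plain Python, not the VTK API) of where grid point (i, j, k) lands in world space:

```python
# Sketch (not the actual VTK API): how a point's world position follows
# from origin and spacing in a structured-points / image-data grid.
def point_position(origin, spacing, ijk):
    """World coordinates of grid point (i, j, k)."""
    return tuple(o + s * n for o, s, n in zip(origin, spacing, ijk))

# With the origin at the first point and unit spacing, point (2, 1, 0) sits at:
print(point_position((10.0, 20.0, 0.0), (1.0, 1.0, 1.0), (2, 1, 0)))
# -> (12.0, 21.0, 0.0)
```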
vtkTriangleStrip
a cell that represents a triangle strip
vtkTriangleStrip is a concrete implementation of vtkCell to represent a 2D triangle strip. A triangle strip is a compact representation of triangles connected edge to edge in strip fashion. The connectivity of a triangle strip is three points defining an initial triangle, then for each additional triangle, a single point that, combined with the previous two points, defines the next triangle.
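The connectivity rule above (n + 2 points define n triangles) can be sketched in a few lines (an illustration, not VTK code):

```python
def strip_to_triangles(points):
    """Expand a triangle-strip point list into individual triangles.

    The first three points form the initial triangle; each additional
    point combines with the previous two to form the next triangle
    (winding order alternates, which renderers account for).
    """
    return [(points[i], points[i + 1], points[i + 2])
            for i in range(len(points) - 2)]

# 5 points -> 3 triangles
print(strip_to_triangles([0, 1, 2, 3, 4]))
# -> [(0, 1, 2), (1, 2, 3), (2, 3, 4)]
```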
vtkPolyData
concrete dataset represents vertices, lines, polygons, and triangle strips
vtkPolyData is a data object that is a concrete implementation of vtkDataSet. vtkPolyData represents a geometric structure consisting of vertices, lines, polygons, and/or triangle strips. Point and cell attribute values (e.g., scalars, vectors, etc.) also are represented.
The actual cell types (vtkCellType.h) supported by vtkPolyData are: vtkVertex, vtkPolyVertex, vtkLine, vtkPolyLine, vtkTriangle, vtkQuad, vtkPolygon, and vtkTriangleStrip.
One important feature of vtkPolyData objects is that special traversal and data manipulation methods are available to process data. These methods are generally more efficient than vtkDataSet methods and should be used whenever possible. For example, traversing the cells in a dataset we would use GetCell(). To traverse cells with vtkPolyData we would retrieve the cell array object representing polygons (for example using GetPolys()) and then use vtkCellArray's InitTraversal() and GetNextCell() methods.
Warning:
Because vtkPolyData is implemented with four separate instances of vtkCellArray to represent 0D vertices, 1D lines, 2D polygons, and 2D triangle strips, it is possible to create vtkPolyData instances that consist of a mixture of cell types. Because of the design of the class, there are certain limitations on how mixed cell types are inserted into the vtkPolyData, and in turn the order in which they are processed and rendered. To preserve the consistency of cell ids, and to ensure that cells with cell data are rendered properly, users must insert mixed cells in the order of vertices (vtkVertex and vtkPolyVertex), lines (vtkLine and vtkPolyLine), polygons (vtkTriangle, vtkQuad, vtkPolygon), and triangle strips (vtkTriangleStrip).
Some filters when processing vtkPolyData with mixed cell types may process the cells in differing ways. Some will convert one type into another (e.g., vtkTriangleStrip into vtkTriangles) or expect a certain type (vtkDecimatePro expects triangles or triangle strips; vtkTubeFilter expects lines). Read the documentation for each filter carefully to understand how each part of vtkPolyData is processed.
OpenGL Links
- The Red Book
- OpenGL Intro
- GL Tutorial
- Volume Viewing
- OpenGL Intro
- Mesa 3D Graphics Library
- OpenGL Tut
- OpenGL Tut
- OpenGL Lessons
Intel-sponsored Tech Projects
With generous sponsorship from Intel, the Education 2000 program was established; it covers six computer science areas as well as the UNC School of Education.
- Computer Graphics
- 3D Medical Imaging
- Multimedia Networking
- Molecular Modeling
- New Laboratories
- LEARN NC
See Tech4Edu for details.
MIDAS
Setting the TwinView options
Add the following options for TwinView after the Device section:
Option "TwinView"                      # required; without it, none of the options below take effect
Option "SecondMonitorHorizSync" ""     # horizontal sync range of the second monitor
Option "SecondMonitorVertRefresh" ""   # vertical refresh range of the second monitor
Option "MetaModes" ""                  # set the MetaModes
Option "TwinViewOrientation" ""        # set the orientation
Option "ConnectedMonitor" ""           # set the connected monitor type

Option "MetaModes" "1280x1024,1280x1024; 1024x768,1024x768"
    # Modes are given in pairs separated by semicolons.
Option "MetaModes" "1600x1200 +0+0, 1024x768 +1600+0;"
    # Offset descriptions follow the conventions of the X "-geometry" command-line
    # option. Both positive and negative offsets are valid, although negative
    # offsets are only allowed when a virtual screen size is set explicitly
    # in the X config file.
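For concreteness, a complete hypothetical Device section using these options might look as follows; the identifier, frequency ranges, and modes are illustrative values, not recommendations for any particular hardware:

```
Section "Device"
    Identifier  "NVIDIA Card"
    Driver      "nvidia"
    Option      "TwinView"
    Option      "SecondMonitorHorizSync"   "30-65"
    Option      "SecondMonitorVertRefresh" "50-75"
    Option      "MetaModes"                "1024x768,1024x768; 800x600,800x600"
    Option      "TwinViewOrientation"      "RightOf"
    Option      "ConnectedMonitor"         "CRT,CRT"
EndSection
```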
nvidia Installation
Before installing
Exit X and close every OpenGL program first (some OpenGL programs can run even without X). It is best to work at the plain console (runlevel 1), which makes debugging easier.

Kernel Source
Use `uname -a` to find the Linux kernel version. Here the system reports 2.6.12-10-386, so that string identifies the matching kernel source: the system boots from linux-image-2.6.12-10-386 (linux-image-2.6.12-10-686 or linux-image-2.6.12-10-686-smp could of course also be used). So install the matching linux-headers package; when you install/remove linux-headers-2.6.12-10-386, linux-headers-2.6.12-10 is installed/removed along with it.

samuel@pika046:doc$ AptSearch 2.6.12-10
linux-headers-2.6.12-10 - Header files related to Linux kernel version 2.6.12
linux-headers-2.6.12-10-386 - Linux kernel headers 2.6.12 on 386

Otherwise the installation fails with an error message like:
/usr/src/linux-headers-2.6.12-10/scripts/gcc-version.sh: No such file or directory
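The version-matching step can be sketched in shell; here the kernel version string is hard-coded to the one above (on a live system you would use "$(uname -r)" instead):

```shell
# Derive the matching linux-headers package name from the kernel version.
kver="2.6.12-10-386"          # on a live system: kver="$(uname -r)"
pkg="linux-headers-${kver}"
echo "${pkg}"
# -> linux-headers-2.6.12-10-386
```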
MetaModes
A single MetaMode describes what mode should be used on each display
device at a given time. Multiple MetaModes list the combinations of modes
and the sequence in which they should be used. When the NVIDIA driver
tells X what modes are available, it is really the minimal bounding box of
the MetaMode that is communicated to X, while the "per display device"
mode is kept internal to the NVIDIA driver. In MetaMode syntax, modes
within a MetaMode are comma separated, and multiple MetaModes are
separated by semicolons. For example:
"<mode name 0>, <mode name 1>; <mode name 2>, <mode name 3>"
Where <mode name 0> is the name of the mode to be used on display device 0
concurrently with <mode name 1> used on display device 1. A mode switch
will then cause <mode name 2> to be used on display device 0 and
<mode name 3> to be used on display device 1.
Here is a real MetaMode entry from the X config sample config file:
Option "MetaModes" "1280x1024,1280x1024; 1024x768,1024x768"
If you want a display device to not be active for a certain MetaMode, you
can use the mode name "NULL", or simply omit the mode name entirely:
"1600x1200, NULL; NULL, 1024x768"
or
"1600x1200; , 1024x768"
Optionally, mode names can be followed by offset information to control
the positioning of the display devices within the virtual screen space;
e.g.:
"1600x1200 +0+0, 1024x768 +1600+0; ..."
Offset descriptions follow the conventions used in the X "-geometry"
command line option; i.e. both positive and negative offsets are valid,
though negative offsets are only allowed when a virtual screen size is
explicitly given in the X config file.
When no offsets are given for a MetaMode, the offsets will be computed
following the value of the TwinViewOrientation option (see below). Note
that if offsets are given for any one of the modes in a single MetaMode,
then offsets will be expected for all modes within that single MetaMode;
in such a case offsets will be assumed to be +0+0 when not given.
When not explicitly given, the virtual screen size will be computed as
the bounding box of all MetaMode bounding boxes. MetaModes with a bounding
box larger than an explicitly given virtual screen size will be discarded.
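To make the syntax rules concrete, here is a small illustrative parser (not the driver's implementation) that splits a MetaModes string and computes the resulting virtual screen size, treating missing offsets as +0+0 and ignoring panning domains:

```python
import re

def parse_metamodes(s):
    """Parse a MetaModes string into a list of MetaModes, each a list of
    (width, height, xoff, yoff) per display device; NULL/empty -> None."""
    metamodes = []
    for mm in s.split(";"):
        modes = []
        for m in mm.split(","):
            m = m.strip()
            if not m or m.upper() == "NULL":
                modes.append(None)
                continue
            w, h, xo, yo = re.match(
                r"(\d+)x(\d+)(?:\s+([+-]\d+)([+-]\d+))?", m).groups()
            modes.append((int(w), int(h), int(xo or 0), int(yo or 0)))
        metamodes.append(modes)
    return metamodes

def virtual_screen(metamodes):
    """Bounding box of all MetaMode bounding boxes (non-negative offsets)."""
    width = max(m[0] + m[2] for mm in metamodes for m in mm if m)
    height = max(m[1] + m[3] for mm in metamodes for m in mm if m)
    return width, height

mms = parse_metamodes("1600x1200 +0+0, 1024x768 +1600+0; 1024x768, NULL")
print(virtual_screen(mms))  # -> (2624, 1200)
```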
A MetaMode string can be further modified with a "Panning Domain"
specification; e.g.:
"1024x768 @1600x1200, 800x600 @1600x1200"
A panning domain is the area in which a display device's viewport will be
panned to follow the mouse. Panning actually happens on two levels with
TwinView: first, an individual display device's viewport will be panned
within its panning domain, as long as the viewport is contained by the
bounding box of the MetaMode. Once the mouse leaves the bounding box of
the MetaMode, the entire MetaMode (i.e. all display devices) will be
panned to follow the mouse within the virtual screen. Note that individual
display devices' panning domains default to being clamped to the position
of the display devices' viewports, thus the default behavior is just that
viewports remain "locked" together and only perform the second type of
panning.
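The first level of panning (keeping a display device's viewport inside its panning domain while following the mouse) is essentially a clamp. A one-dimensional sketch, with invented parameter names:

```python
def pan_viewport(view_x, view_w, domain_x, domain_w, mouse_x):
    """Slide a viewport of width view_w so that mouse_x stays visible,
    clamped to the panning domain [domain_x, domain_x + domain_w)."""
    if mouse_x < view_x:                      # mouse left of the viewport
        view_x = mouse_x
    elif mouse_x >= view_x + view_w:          # mouse right of the viewport
        view_x = mouse_x - view_w + 1
    lo, hi = domain_x, domain_x + domain_w - view_w
    return max(lo, min(view_x, hi))

# A 1024-wide viewport panning inside a 1600-wide domain:
print(pan_viewport(0, 1024, 0, 1600, 1500))  # -> 477
```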
The most beneficial use of panning domains is probably to eliminate dead
areas -- regions of the virtual screen that are inaccessible due to
display devices with different resolutions. For example:
"1600x1200, 1024x768"
produces an inaccessible region below the 1024x768 display. Specifying a
panning domain for the second display device:
"1600x1200, 1024x768 @1024x1200"
provides access to that dead area by allowing you to pan the 1024x768
viewport up and down in the 1024x1200 panning domain.
Offsets can be used in conjunction with panning domains to position the
panning domains in the virtual screen space (note that the offset
describes the panning domain, and only affects the viewport in that the
viewport must be contained within the panning domain). For example, the
following describes two modes, each with a panning domain width of 1900
pixels, and the second display is positioned below the first:
"1600x1200 @1900x1200 +0+0, 1024x768 @1900x768 +0+1200"
Because it is often unclear which mode within a MetaMode will be used on
each display device, mode descriptions within a MetaMode can be prepended
with a display device name. For example:
"CRT-0: 1600x1200, DFP-0: 1024x768"
If no MetaMode string is specified, then the X driver uses the modes
listed in the relevant "Display" subsection, attempting to place matching
modes on each display device. Volume rendering/3d Reconstruction
- WidgetsCxxTests.o
- BoxWidget.o
- TestImplicitPlaneWidget.o
- TestOrientationMarkerWidget.o
- ImagePlaneWidget.o
- TestSplineWidget.o
- TestScalarBarWidget.o
- TestLineWidget.o
- TestPlaneWidget.o
- TestPointWidget.o
- TestImageTracerWidget.o
VTK installation
1. Parameters used by Lsi:
./configure --with-x --with-tcl --with-tkwidget --with-patented --with-contrib --with-shared
1. Vtk & OpenGL
http://www.vtk.org/Wiki/VTK_OpenGL#Setup_VTK
1. All options:
[samuel@palm141 vtk3.2]$ ./configure --help
Usage: configure [options] [host] Options: [defaults in brackets after descriptions]
Configuration:
--cache-file=FILE cache test results in FILE
--help print this message
--no-create do not create output files
--quiet, --silent do not print `checking...' messages
--version print the version of autoconf that created configure
Directory and file names:
--prefix=PREFIX install architecture-independent files in PREFIX
[/usr/local]
--exec-prefix=EPREFIX install architecture-dependent files in EPREFIX
[same as prefix]
--bindir=DIR user executables in DIR [EPREFIX/bin]
--sbindir=DIR system admin executables in DIR [EPREFIX/sbin]
--libexecdir=DIR program executables in DIR [EPREFIX/libexec]
--datadir=DIR read-only architecture-independent data in DIR
[PREFIX/share]
--sysconfdir=DIR read-only single-machine data in DIR [PREFIX/etc]
--sharedstatedir=DIR modifiable architecture-independent data in DIR
[PREFIX/com]
--localstatedir=DIR modifiable single-machine data in DIR [PREFIX/var]
--libdir=DIR object code libraries in DIR [EPREFIX/lib]
--includedir=DIR C header files in DIR [PREFIX/include]
--oldincludedir=DIR C header files for non-gcc in DIR [/usr/include]
--infodir=DIR info documentation in DIR [PREFIX/info]
--mandir=DIR man documentation in DIR [PREFIX/man]
--srcdir=DIR find the sources in DIR [configure dir or ..]
--program-prefix=PREFIX prepend PREFIX to installed program names
--program-suffix=SUFFIX append SUFFIX to installed program names
--program-transform-name=PROGRAM
run sed PROGRAM on installed program names
Host type:
--build=BUILD configure for building on BUILD [BUILD=HOST]
--host=HOST configure for HOST [guessed]
--target=TARGET configure for TARGET [TARGET=HOST]
Features and packages:
--disable-FEATURE do not include FEATURE (same as --enable-FEATURE=no)
--enable-FEATURE[=ARG] include FEATURE [ARG=yes]
--with-PACKAGE[=ARG] use PACKAGE [ARG=yes]
--without-PACKAGE do not use PACKAGE (same as --with-PACKAGE=no)
--x-includes=DIR X include files are in DIR
--x-libraries=DIR X library files are in DIR
--enable and --with options recognized:
--with-x use the X Window System
--with-sproc use sproc instead of pthreads if possible
--with-mesa create the mesa specific renderer
--with-opengl create the opengl specific renderer
--with-tkwidget build the vtkTkRenderWidget class
--with-tkwidgets build the vtkTkRenderWidget class
--with-mpi create the mpi specific controller
--with-shared create shared libraries
--with-tcl build vtk the tcl based interpreter
--with-java build vtk with java support
--with-python build vtk with python support
--without-graphics do not include the graphics classes
--without-imaging do not include the imaging classes
--with-tkwidget build the vtkTkRenderWidget class
--with-patented include the patented classes
--with-contrib include the contrib classes
--with-local include the local classes
--with-bsdmake uses bsd style makefile includes
--with-java build vtk with java support
From the mailing list:
A couple of things to suggest: look at the CMakeCache.txt file to make certain it is pointing to the correct OpenGL libraries.
nVidia & OpenGL
http://www.gentoo.org.tw/doc/nvidia-guide.xml -- Gentoo Linux nVidia Guide (last updated: 2005-07-15)

Contents:

1. Introduction

nVidia publishes its own Linux drivers, providing good performance and 3D acceleration. The driver comes in two parts: nvidia-kernel and nvidia-glx. nvidia-kernel is the kernel driver that handles low-level communication with the card. It is simply a kernel module named nvidia; it is built against your kernel at install time and must be loaded whenever you use the nvidia driver. Besides the kernel driver, you also need to install the X11 GLX layer (nvidia-glx). X uses it to draw graphics, and internally it communicates with the hardware through the nvidia-kernel driver.

2. Configuring Your Card

Installing the appropriate drivers

As mentioned above, the nVidia kernel driver is built against your kernel. It is compiled as a module, so your kernel must support module loading. If you configured your kernel with genkernel, everything is fine. If not, double-check that the kernel supports this feature:

Code listing 2.1: Enable loadable module support
Loadable module support --->
  [*] Enable loadable module support

You also need to enable the Memory Type Range Register in the kernel:

Code listing 2.2: Enable MTRR
Processor and Features --->
  [*] MTRR (Memory Type Range Register) support

The nVidia module and libraries ship in two separate packages: nvidia-glx and nvidia-kernel. The former is the X11 GLX library, the latter the kernel module. You need both, so you should install them now. The nvidia-kernel ebuild determines the kernel version from the /usr/src/linux symlink. Make sure this symlink points to the kernel you actually use and that the kernel is configured correctly; see the Installation Handbook for details on configuring the kernel. If you use gentoo-sources-2.6.11-r6, your /usr/src directory looks like this:

Code listing 2.3: Checking the /usr/src/linux symlink
# cd /usr/src
# ls -l
(check that linux points to the right directory)
lrwxrwxrwx   1 root root   22 Apr 23 18:33 linux -> linux-2.6.11-gentoo-r6
drwxr-xr-x   4 root root  120 Apr  8 18:56 linux-2.4.26-gentoo-r4
drwxr-xr-x  18 root root  664 Dec 31 16:09 linux-2.6.10
drwxr-xr-x  18 root root  632 Mar  3 12:27 linux-2.6.11
drwxr-xr-x  19 root root 4096 Mar 16 22:00 linux-2.6.11-gentoo-r6

In the output above, the linux symlink points to the linux-2.6.11-gentoo-r6 kernel. If the symlink does not point to the correct kernel, update it like this:

Code listing 2.4: Creating/updating the /usr/src/linux symlink
# cd /usr/src
# ln -snf linux-2.6.11-gentoo-r6 linux

Since nvidia-glx depends on nvidia-kernel, emerging nvidia-glx is enough:

Code listing 2.5: Installing the nVidia drivers
# emerge nvidia-glx

Important: every time you build a new kernel or rebuild the current one, you need to run emerge nvidia-kernel to reinstall the nVidia module. nvidia-glx is not affected by kernel changes, nor does it need rebuilding when X is rebuilt or upgraded.

When the installation finishes, run modprobe nvidia to load the kernel module into memory:

Code listing 2.6: Loading the kernel module
# modprobe nvidia

To avoid loading the module by hand on every boot, you probably want it loaded automatically at boot time, so edit /etc/modules.autoload.d/kernel-2.6 (or kernel-2.4, depending on your kernel version) and add nvidia to it. Don't forget to run modules-update after saving and exiting!
Code listing 2.7: Running modules-update
# modules-update

Configuring the X server

With the appropriate drivers installed, you must configure the X server (XFree86 or Xorg) to use the nvidia driver instead of the default nv driver. Open /etc/X11/xorg.conf (or /etc/X11/XF86Config if you still use the old config file) with your favorite text editor (e.g. nano or vim), go to the Device section, and change the Driver line:

Code listing 2.8: Changing nv to nvidia in the X server config
Section "Device"
  Identifier "nVidia Inc. GeForce2"
  Driver     "nvidia"
  VideoRam   65536
EndSection

Then go to the Module section and make sure the glx module is loaded and dri is not:

Code listing 2.9: Updating the Module section
Section "Module"
  (...)
  # Load "dri"
  Load "glx"
  (...)
EndSection

Next, in the Screen section, make sure DefaultDepth is set to 16 or 24. Otherwise, nvidia-glx will not start:

Code listing 2.10: Updating the Screen section
Section "Screen"
  (...)
  DefaultDepth 16
  Subsection "Display"
    (...)
EndSection

Run opengl-update so the X server uses the nVidia GLX libraries:

Code listing 2.11: Running opengl-update
# opengl-update nvidia

Adding your user to the video group

You must add your user to the video group so that it has permission to access the nvidia device files:

Code listing 2.12: Adding a user to the video group
# gpasswd -a youruser video

If you are not using udev, this step may not be strictly necessary, but it does no harm to your system :p

Testing your card

To test your nVidia card, start X and run the command glxinfo | grep direct; it should tell you that direct rendering is enabled:

Code listing 2.13: Checking the direct rendering status
$ glxinfo | grep direct
direct rendering: Yes

To test your FPS, run glxgears.

Enabling nvidia support

Some tools, such as mplayer and xine-lib, use their own "nvidia" USE flag to enable XvMC-NVIDIA support, which is useful when watching high-resolution movies. Add "nvidia" to the USE variable in /etc/make.conf, or to the media-video/mplayer and media-libs/xine-lib entries in /etc/portage/package.use. Then run emerge -uD --newuse world to rebuild the applications affected by this change.

3. Troubleshooting

Getting 2D to work on machines with 4 GB or more of memory

If you have trouble with nVidia 2D acceleration, it is likely that you cannot set up a write-combining MTRR range. To verify, check the contents of /proc/mtrr:

Code listing 3.1: Checking whether write-combining is enabled
# cat /proc/mtrr

Every line should contain "write-back" or "write-combining". If you see a line with "uncachable", you need to change a BIOS setting to fix it. Reboot into the BIOS and find the MTRR setting (usually under "CPU Settings"). Change the setting from "continuous" to "discrete", then boot back into Linux. The "uncachable" entry will be gone and 2D acceleration will work correctly.

I get errors about unsupported 4K stack sizes

nvidia-kernel versions before 1.0.6106 only support 8K stack sizes. Newer kernels (2.6.6 and later) support 4K stack sizes. Do not select 4K stack size in your kernel configuration; you can find this setting in the Kernel Hacking section.

4.
Advanced Configuration

Documentation

The nVidia driver package also includes extensive documentation. It is installed under the /usr/share/doc directory, and you can browse it with the following command:

Code listing 4.1: Viewing the NVIDIA documentation
# less /usr/share/doc/nvidia-glx-*/README.txt.gz

Kernel module parameters

The nvidia kernel module accepts a number of parameters (options) that let you tune the driver's behavior. Edit /etc/modules.d/nvidia to add or change these parameters, and remember to run modules-update after modifying it. Also keep in mind that the nvidia module must be reloaded for new settings to take effect.

Advanced X configuration

The GLX layer also has numerous configurable options. They control TV output, multiple monitors, monitor frequency detection, and so on. As before, all options are clearly described in the documentation. If you want to set one of these options, add it to the corresponding Device section of your X config file (usually /etc/X11/xorg.conf). For example, to disable the startup splash screen:

Code listing 4.2: Advanced nvidia settings in the X config file
Section "Device"
  Identifier "nVidia Inc. GeForce2"
  Driver     "nvidia"
  Option     "NoLogo" "true"
  VideoRam   65536
EndSection
vtk documentation
If BUILD_DOCUMENTATION=ON was selected in ccmake, the doxygen package must be installed. However, make does not build the documentation automatically; instead, it is done as follows:
samuel@pika047:Doxygen$ sh ./doc_makeall.sh
doc_makeall.sh builds the documentation set by calling doc_header2doxygen.pl, written by Sebastien Barre and others.
vtk is a very powerful graphics library, but it does not ship with documentation, so I searched the web. Below is a list of documentation sites; the latest version is VTK-5.1.0:
- http://public.kitware.com/VTK/doc/nightly/html/
- http://www.barre.nom.fr/vtk/links-doc.html
- http://www.barre.nom.fr/medical/these/pictures.html
- http://www.barre.nom.fr/vtk/
- IvI Diagram
- http://www.vtk.org/documents.php
DICOM medical image format @ python
In fact, itk already provides DICOM file Readers/Writers and has already wrapped the gdcm module. On its download page, CVS access is as follows:
cvs -d :pserver:anoncvs@www.itk.org:/cvsroot/Insight login   (empty password)
cvs -d :pserver:anoncvs@www.itk.org:/cvsroot/Insight co Insight
cvs -d :pserver:anoncvs@www.itk.org:/cvsroot/Insight co InsightDocuments   (documentation)
cvs -d :pserver:anoncvs@www.itk.org:/cvsroot/Insight co InsightApplications   (applications)
The DICOM file format seems to carry a great deal of useful information, such as film-retrieval data: you can obtain the images you need by entering the patient ID, accession number, and so on, avoiding the hassle of borrowing films. In other words, the images are linked with the patient's record and history, combining image and data.
Here is an article by Sebastien Barre discussing the DICOM format/reader.
There is also a vtk enthusiast -- dgobbi -- who teaches vtk courses, including a part on imaging.
David's DICOM parser is written in C and comes with a rich introduction; it probably deserves a careful read.
To read DICOM medical image files while avoiding the complexity of the vtk library, I wanted to use python's image-processing package -- Imaging. Installing Imaging on Ubuntu 5.10, however, requires the following dependencies, so I installed the corresponding packages:
JPEG -- libjpeg62-dev
ZLIB -- zlib1g, zlib1g-dev
FREETYPE2 -- libfreetype6-dev
After surfing the web for a while, I found a physics and python enthusiast -- Miller -- but his dicom.py seems to have problems: it cannot display the image files. I will have to find another way, so I turned to vtkpython instead.
GL/gl.h header file missing
Vtk Tutorial Examples
Example programs:
Class To Examples (3..B)   c2_vtk_e_0.html
Class To Examples (C..E)   c2_vtk_e_1.html
Class To Examples (F..M)   c2_vtk_e_2.html
Class To Examples (O)      c2_vtk_e_3.html
Class To Examples (P)      c2_vtk_e_4.html
Class To Examples (Q..R)   c2_vtk_e_5.html
Class To Examples (S)      c2_vtk_e_6.html
Class To Examples (T..X)   c2_vtk_e_7.html
vtkPolyDataMapper *coneMapper = vtkPolyDataMapper::New();
coneMapper->SetInput( cone->GetOutput() );
import vtk
import time
cone = vtk.vtkConeSource()
cone.SetHeight( 3.0 )
cone.SetRadius( 1.0 )
cone.SetResolution( 10 )
coneMapper = vtk.vtkPolyDataMapper()
coneMapper.SetInputConnection( cone.GetOutputPort() )
coneActor = vtk.vtkActor()
coneActor.SetMapper( coneMapper )
ren1= vtk.vtkRenderer()
ren1.AddActor( coneActor )
ren1.SetBackground( 0.1, 0.2, 0.4 )
renWin = vtk.vtkRenderWindow()
renWin.AddRenderer( ren1 )
renWin.SetSize( 300, 300 )
for i in range(0,360):
    time.sleep(0.03)
    renWin.Render()
    ren1.GetActiveCamera().Azimuth( 1 )
#include "vtkConeSource.h"
#include "vtkPolyDataMapper.h"
#include "vtkRenderWindow.h"
#include "vtkCamera.h"
#include "vtkActor.h"
#include "vtkRenderer.h"
int main()
{ vtkConeSource *cone = vtkConeSource::New();
cone->SetHeight( 3.0 );
cone->SetRadius( 1.0 );
cone->SetResolution( 10 );
vtkPolyDataMapper *coneMapper = vtkPolyDataMapper::New();
coneMapper->SetInputConnection( cone->GetOutputPort() );
vtkActor *coneActor = vtkActor::New();
coneActor->SetMapper( coneMapper );
vtkRenderer *ren1= vtkRenderer::New();
ren1->AddActor( coneActor );
ren1->SetBackground( 0.1, 0.2, 0.4 );
vtkRenderWindow *renWin = vtkRenderWindow::New();
renWin->AddRenderer( ren1 );
renWin->SetSize( 300, 300 );
int i;
for (i = 0; i < 360; ++i)
{ renWin->Render();
ren1->GetActiveCamera()->Azimuth( 1 ); }
cone->Delete();
coneMapper->Delete();
coneActor->Delete();
ren1->Delete();
renWin->Delete();
return 0; }
#Cone.py
import vtk
import time
def myCallback(obj,string): print "Starting a render"
cone = vtk.vtkConeSource()
cone.SetHeight( 3.0 )
cone.SetRadius( 1.0 )
cone.SetResolution( 10 )
coneMapper = vtk.vtkPolyDataMapper()
coneMapper.SetInput( cone.GetOutput() )
coneActor = vtk.vtkActor()
coneActor.SetMapper( coneMapper )
ren1= vtk.vtkRenderer()
ren1.AddActor( coneActor )
ren1.SetBackground( 0.1, 0.2, 0.4 )
ren1.AddObserver("StartEvent", myCallback)
renWin = vtk.vtkRenderWindow()
renWin.AddRenderer( ren1 )
renWin.SetSize( 300, 300 )
for i in range(0,360):
    time.sleep(0.03)
    renWin.Render()
    ren1.GetActiveCamera().Azimuth( 1 )
#vtkpython Examples/Tutorial/Step1/Python/Cone.py
import vtk
import time

cone = vtk.vtkConeSource()
cone.SetHeight( 3.0 )
cone.SetRadius( 1.0 )
cone.SetResolution( 10 )
coneMapper = vtk.vtkPolyDataMapper()
coneMapper.SetInput( cone.GetOutput() )
coneActor = vtk.vtkActor()
coneActor.SetMapper( coneMapper )
ren1 = vtk.vtkRenderer()
ren1.AddActor( coneActor )
ren1.SetBackground( 0.1, 0.2, 0.4 )
renWin = vtk.vtkRenderWindow()
renWin.AddRenderer( ren1 )
renWin.SetSize( 300, 300 )
for i in range(0,360):
    time.sleep(0.03)
    renWin.Render()
    ren1.GetActiveCamera().Azimuth( 1 )
#Another example using Tkinter
from Tkinter import *
from VTK import *

# Make a root window
root = Tk()
# Add a vtkTkRenderWidget
renderWidget = vtkTkRenderWidget(root, width=400, height=400)
renderWidget.pack(expand='true', fill='both')
# Get the render window from the widget
renWin = renderWidget.GetRenderWindow()

# Next, do the VTK stuff
ren = vtkRenderer()
renWin.AddRenderer(ren)
cone = vtkConeSource()
cone.SetResolution(16)
coneMapper = vtkPolyDataMapper()
coneMapper.SetInput(cone.GetOutput())
coneActor = vtkActor()
coneActor.SetMapper(coneMapper)
ren.AddActor(coneActor)

def quit():
    root.destroy()

button = Button(text="Quit", command=quit)
button.pack(expand='true', fill='x')
# start up the event loop
root.mainloop()
Vtk Pipeline
a. Camera => the eye/window through which you look at the object
b. Azimuth => sets the camera's angle, i.e. the azimuthal angle away from the orthogonal direction; try changing it and see
c. Viewport: {(0.0, 0.0) ~ (1.0, 1.0)}
d. Visit http://horse.nchc.org.tw/IvI3 and pick any model to look at.
2. _VTK_Process.cxx => ModelToImage()
You can consult mdl2img.h to get the prototypes of all the functions.
3. How do we use the SetViewport function properly, so that an object such as a cone appears in the 2D plane the way we want (e.g. at a 45-degree angle to the viewing direction)? And what does SetViewport((0.0,0.0)~(1.0,1.0)) actually mean?
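Regarding Azimuth above: it swings the camera position around the view-up axis through the focal point. A minimal sketch of the underlying rotation (plain Python, not the VTK API; it assumes the view-up axis is y):

```python
import math

def azimuth(position, focal_point, degrees):
    """Rotate the camera position about the y axis through the focal point."""
    a = math.radians(degrees)
    x, y, z = (p - f for p, f in zip(position, focal_point))
    rx = x * math.cos(a) + z * math.sin(a)
    rz = -x * math.sin(a) + z * math.cos(a)
    fx, fy, fz = focal_point
    return (fx + rx, fy + y, fz + rz)

# A camera 10 units in front of the origin, swung 90 degrees to the side:
pos = azimuth((0.0, 0.0, 10.0), (0.0, 0.0, 0.0), 90.0)
print([round(c, 6) for c in pos])  # -> [10.0, 0.0, 0.0]
```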
3.0 Object Models
There are two distinct parts to our object design. The first is the graphics model, which is an abstract model for 3D graphics. The second is the visualization model, which is a data-flow model of the visualization process.
3.1 The Graphics Model
The graphics model captures the essential features of a 3D graphics system in a form that is easy to understand and use. The abstraction is based on the movie-making industry, with some influence from current graphical user interface (GUI) windowing systems. There are nine basic objects in the model.
- Render Master - coordinates device-independent methods and creates rendering windows.
- Render Window - manages a window on the display device. One or more renderers draw into a render window to generate a scene (i.e., final image).
- Renderer - coordinates the rendering of lights, cameras, and actors.
- Light - illuminates the actors in a scene.
- Camera - defines the view position, focal point, and other camera characteristics.
- Actor - an object drawn by a renderer in the scene. Actors are defined in terms of a mapper, a property, and a transform object.
- Property - represents the rendered attributes of an actor including object color, lighting (e.g., specular, ambient, diffuse), texture map, drawing style (e.g., wireframe or shaded); and shading style.
- Mapper - represents the geometric definition of an actor and maps the object through a lookup table. More than one actor may refer to the same mapper.
- Transform - an object that consists of a 4x4 transformation matrix and methods to modify the matrix. It specifies the position and orientation of actors, cameras, and lights.
vtk pipeline:
1. Data->Filter->Actor->Camera->Renderer->Lights->Render window
2. Mapper->Properties
3. The basic setup procedure in most VTK programs is: source -> mapper -> actor -> renderer -> renderwindow.
4. VTK uses a command/observer design pattern. That is, observers watch for particular events that any vtkObject (or subclass) may invoke on itself. For example, the vtkRenderer invokes a "StartEvent" as it begins to render. Here we add an observer that invokes a command when this event is observed.
ren1.AddObserver("StartEvent", myCallback)
5. Specify a vtkViewport object to be used to transform the vtkPolyData points into 2D coordinates. By default (no vtkViewport specified), the point coordinates are generated by ignoring the z values. If a viewport is defined, then the points are transformed into viewport coordinates.
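The normalized-viewport-to-pixel mapping that a viewport implies can be sketched as a simple affine transform (an illustration, not VTK's implementation): a normalized point inside a viewport (xmin, ymin, xmax, ymax), all in [0, 1], maps to window pixels as follows:

```python
def viewport_to_pixels(norm_pt, viewport, window_size):
    """Map a normalized point inside a viewport (xmin, ymin, xmax, ymax),
    all in [0, 1], to pixel coordinates in a window of size (w, h)."""
    xmin, ymin, xmax, ymax = viewport
    w, h = window_size
    nx, ny = norm_pt
    px = (xmin + nx * (xmax - xmin)) * w
    py = (ymin + ny * (ymax - ymin)) * h
    return px, py

# The center of the right half-viewport of a 300x300 window:
print(viewport_to_pixels((0.5, 0.5), (0.5, 0.0, 1.0, 1.0), (300, 300)))
# -> (225.0, 150.0)
```

This also answers what SetViewport((0.0, 0.0) ~ (1.0, 1.0)) means: the renderer occupies the entire window.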
DICOMImageReader Pipeline
Sylvain JAUME sylvain@mit.edu
Here is the usage of the DICOM ImageReader I found, with the pipeline hidden inside it laid out in a simple form.
vtkDICOMImageReader *DICOMImageReader = vtkDICOMImageReader::New();
DICOMImageReader->SetDirectoryName( inputDirectory );
DICOMImageReader->Update();
int inputDimensions[3] = { -1, -1, -1 };
int inputExtent[6] = { 0, -1, 0, -1, 0, -1 };
DICOMImageReader->GetOutput()->GetDimensions( inputDimensions );
DICOMImageReader->GetOutput()->GetExtent( inputExtent );
vtkMetaImageWriter *metaImageWriter = vtkMetaImageWriter::New();
metaImageWriter->SetInput( DICOMImageReader->GetOutput() );
DICOMImageReader->Delete();
metaImageWriter->SetFileName( outputFileName );
metaImageWriter->Write();
ASSERT_MACRO( metaImageWriter->GetErrorCode() == 0 );
metaImageWriter->Delete();
END_MACRO( argv[0] );
The next step is to understand what the API really means and to become familiar with its usage.
VTK installation
vtk is a very good example of using python; the following article describes how to install and configure it, and which packages are needed.
samuel@pika046:~/VtkBuild$ ~/CMake/bin/cmake -i ~/VTK   (wizard mode)
libgl1-mesa-dev --
  OPENGL_gl_LIBRARY:FILEPATH=/usr/lib/libGL.so
  OPENGL_INCLUDE_DIR:PATH=/usr/include   (/usr/include/GL/gl.h)
X11/StringDefs.h: file missing -> libice-dev libsm-dev libxt-dev
http://www.cmake.org/HTML/Download.html
cvs -d :pserver:anonymous@www.cmake.org:/cvsroot/CMake login   (respond with password cmake)
cvs -d :pserver:anonymous@www.cmake.org:/cvsroot/CMake co CMake
http://public.kitware.com/VTK/get-software.php
cvs -d :pserver:anonymous@public.kitware.com:/cvsroot/VTK login   (respond with password vtk)
cvs -d :pserver:anonymous@public.kitware.com:/cvsroot/VTK checkout VTK
cvs -d :pserver:anonymous@public.kitware.com:/cvsroot/VTKData login   (respond with password vtk)
cvs -d :pserver:anonymous@public.kitware.com:/cvsroot/VTKData co VTKData
install libc6-dev kernel-header
install g++
install make
- Download cmake
- cvs co VTK, VTKData
- Use an out-of-tree make
vtk requires cmake (ccmake is recommended), and generating the Makefile out of the source tree is more convenient:
mkdir VtkBuild
cd VtkBuild
ccmake /home/samuel/VTK
Settings in CMakeCache.txt, as set in ccmake:
BUILD_EXAMPLES                  ON
BUILD_SHARED_LIBS               ON
CMAKE_BACKWARDS_COMPATIBILITY   2.0
CMAKE_BUILD_TYPE
CMAKE_INSTALL_PREFIX            /usr/local
PYTHON_INCLUDE_PATH             /usr/include/python2.4
PYTHON_LIBRARY                  /usr/lib/python2.4/config/libpython2.4.a
TCL_INCLUDE_PATH                /usr/include/tcl8.4/
TCL_LIBRARY                     /usr/lib/libtcl8.4.so.0
TK_INCLUDE_PATH                 /usr/include/tcl8.4/
TK_LIBRARY                      /usr/lib/libtk8.4.so.0
VTK_DATA_ROOT                   /home/samuel/VTKData
VTK_USE_CG_SHADERS              OFF
VTK_USE_GLSL_SHADERS            OFF
VTK_USE_PARALLEL                OFF
VTK_USE_RENDERING               ON
VTK_USE_RPATH                   ON
VTK_WRAP_JAVA                   OFF
VTK_WRAP_PYTHON                 ON
VTK_WRAP_TCL                    ON
Note the options: since Vtk supports python bindings, PYTHON_INCLUDE_PATH and PYTHON_LIBRARY must be set; if ccmake cannot find them automatically at first, you can write them into CMakeCache.txt by hand.
If the system reports that tcl/tk cannot be found, install tcl8.4-dev and tk8.4-dev first, then point TCL_INCLUDE_PATH and TK_INCLUDE_PATH to /usr/include/tcl8.4/.
3d Programming basics
FOREWORD
Before I started writing the OpenGL programming tutorials I decided to take some time and write an additional tutorial that will go more into the basics and how everything fits together in the 3D world. This could be really useful for beginners. Not a lot of people would like a bunch of code dropped on them without any general explanation of what's going on. So, basically this is what this tutorial is all about: making things clear. If you know what perspective and back-face culling is you would probably want to skip this part but I would still suggest reading it because not only does it contain 3D fundamentals but also addresses some information on how this series of OpenGL tutorials are structured and why it is here and more importantly some really basic OpenGL-related information like general naming convention that OpenGL uses for functions and variables and how to read them. I just wanted to add that writing tutorials makes me learn something as well since I'm self-taught and pretend to be not the school-friendly kind of person. So if you notice any mistakes or errors in code or anywhere else let me know. This is going to be a really quick overview of 3D terms and techniques but I will try to add things to it when I have more time. At this moment it is already pretty long though.
THE MORE YOU GET, THE BETTER... NOT SO
OpenGL is an interesting topic and there are many tutorials written on it. So why bother writing another one? The reason is simply I felt the lack of DETAILED information in those tutorials, the kind you would see in a book, only this time you don't have to buy anything. Seems like people just want to write tutorials just for the sake of having a tutorial section on their site. Writing tutorials (or any other documentation or books) is hard and time-consuming. Not everyone has the opportunity and patience to write a few solid tutorials. I also feel that there's a great deal of demand today for 3D-tutorials (be it D3D or GL). I didn't pick OpenGL because it was "better" or more portable than D3D but because this is what I know about. In the future I will try to document D3D as well but only if there's enough demand (send me an e-mail to let me know you're interested in D3D if you are). Well, I think it's time to actually start writing something useful. And here it goes.
3D BASICS EVERYONE SHOULD KNOW BEFORE TOUCHING OPENGL
In this part I will cover 3D graphics in general, and most of the following topics don't have to be constrained to OpenGL alone. So what exactly is 3D, and how can it be represented to the viewer on the computer screen? To describe the idea behind rendering 3D objects on the screen, it's best for me to use a 3D object. Let's examine the following image of a wire-framed 3D cube.
You see, for your brain 3D objects are so common that by looking at this picture you will instantly recognize a 3D shape even though it's nothing more than a collection of 12 2D lines connected to each other with specific angles between them. And yet it's hard to think of this image as being "flat". 3D graphics on the visual level is (mostly) all about rendering objects to the screen. The question is what are the main requirements to render an object so that you will be able to correctly recognize it as a 3D object and not just a collection of lines or perhaps polygons? Obviously, the idea is to render objects to the screen the way you would see them in real life. And how do you see objects in real life? This is where the meaning of perspective comes from. In the pre-computer ages artists had used the same techniques for painting their masterpieces that today's 3D software is using for creating 3D images. The point behind perspective is that all objects farther away from the viewer look smaller than objects closer to the viewer, and ultimately they disappear into the vanishing point.
This is true for most 3D graphics applications. Now let's take a look at the OpenGL coordinate system we will be using. It is the so-called 3D Cartesian coordinate system. In addition to the x- and y-axes known from 2D graphics, we have the z-axis, which extends into negative space from the center of the screen away from the viewer, and into positive space from the center of the screen towards the viewer. This image visually mimics what I've just said.
Projection maps the original coordinates (x, y, z) through: 1. a scale by ViewingDistance/z, and 2. a translation to (width/2, height/2). If z is zero (the point lies in the xy plane), it projects to infinity on the new xy plane; if z is infinitely far away, it projects onto the center. Taking ViewingDistance = 1: for |z| < 1 the point projects farther out than rho = sqrt(x^2 + y^2); for |z| > 1 it projects closer in than rho. It behaves like a conformal mapping with respect to the unit circle |z| = 1: the inside maps to the outside, the outside maps to the inside, the origin maps to infinity, and infinity maps to the origin.
PROJECTION
X2D <--- HalfWidth + OrigX * Ratio
Y2D <--- HalfHeight + OrigY * Ratio
Ratio = ViewDist / OrigZ
As we take little steps towards the end of this tutorial, I think it's the right time to explain projection. There are actually two types of projection: Perspective Projection and Orthographic Projection (described shortly). First I want to talk about perspective projection because I've already explained perspective. Objects that you're going to render will be what we might call "projected" to the screen. What I mean by projection is the actual conversion from the 3D coordinates (usually vertices of objects) to the 2D flat surface of the screen. Since the computer screen has only two dimensions, we somehow have to display 3D objects on the 2D screen, and that's precisely what projection does for us. Perspective projection works as follows. I will take a single pixel as an example. Imagine we have a pixel with coordinates (5, -3, 2) on the x, y and z-axes respectively, and we want to project it to the screen. We do it with the following formula. Assume we have a structure POINT3D containing the coordinates of the point, initialized with the mentioned values for this example.
// initialize point
POINT3D point = { 5, -3, 2 };
// find the right position on the screen in 2D coordinates
int x2d = HALFWIDTH + point.x * ViewingDistance / point.z;
int y2d = HALFHEIGHT + point.y * ViewingDistance / point.z;
// project the 3D point to the screen
Pixel(x2d, y2d);
Let's take the formula apart. As you already know, in 2D all screen coordinates usually behave like the 4th quadrant of the 2D Cartesian coordinate system: (0, 0) is at the upper left corner of the screen. In 3D graphics, we want our view, or the camera to be exact (the camera is explained a little further into this tutorial), to be located as in the following image, so that we're always looking straight down the negative space of the z-axis.
As you can see, if we had a 3D point at (0, 0, -16) it would be exactly in the center of the screen. A little modification is required here. Take a look at the projection formula again. There we're adding halves of the screen resolution first to center all results. We're in fact translating the point from (0, 0) to (halfwidth, halfheight) on the screen. If we're in 640x480 resolution we would be translating the point to (320, 240). Take the constant ViewingDistance out of the equation for a second, and you will realize that the second part of the formula is just the relationship between X and Z for x2d, and between Y and Z for y2d. This is the most important idea behind perspective-projected objects. As you recall, objects that appear farther from the viewer are smaller, and this is exactly the relationship between the 2D points and perspective: it is achieved by dividing both the horizontal and vertical coordinates by how far away the object is. However, there is a problem. By merely dividing the x and y coordinates by depth (the z coordinate) we only get the ratio between the depth and the vertical/horizontal position of the pixel. What we also need is how they relate to the Viewing Distance and Viewing Volume. These two terms are explained below.
The Viewing Volume is the space between the near clipping plane (or the viewing plane) and the far clipping plane, as seen on the second picture below. So, back to our equation for a second: we simply multiply x and y by ViewingDistance to get the right relationship between the Viewing Volume and the X and Y coordinates. Simple as that. Viewing Distance is closely related to the Viewing Volume: the longer the viewing distance, the narrower the line of sight and therefore the smaller the viewing volume. Well, the good news is that we don't have to worry about all of this in OpenGL since everything is done behind the scenes; however, you still need to understand these terms to understand why images appear the way they do on the screen, and I just wanted to explain the basics of perspective projection. The above formula could be used in a software 3D renderer, but we're not interested in that at this moment.
As a conclusion to this paragraph, here's how a whole object (as opposed to the single pixel in the previous example) would be projected onto the screen in theory. At the upper right corner of this image there is a real object (a cube) in space. I tried to make the projected version of the cube on the screen as close as possible to how it would really appear, though I'm sure it's not exact. Just keep in mind that the whole object is projected onto the flat screen pixel by pixel (and polygon by polygon on a higher scale).
I talked about Viewing Volume and how it is related to the perspective projection equation. But what is Viewing Volume? The Viewing Volume is also known as the Clipping volume or the Frustum. Here's the visual representation of the viewing volume.
There are two planes: the viewing plane and the far clipping plane. The viewing plane is actually the screen, and the far plane indicates how far you can "see"; whatever is behind the far clipping plane will not be visible. The viewing volume is the space between those two planes. The viewing volume is sometimes called the clipping volume because you usually want to clip your polygons against it.
ORTHOGRAPHIC PROJECTION
As I mentioned before there is another type of projection, which is the Orthographic Projection. This type of projection cannot be used for games or real-time applications with desirable results since it ignores the z-axis coordinate. In other words, if you draw a bunch of trees close and far away from the view, they will all appear the same size. Orthographic projection is used with technical design software and OpenGL supports it as well. In this series of OpenGL tutorials we will be always using the perspective projection.
THE CAMERA
At this point I should explain what the camera is. The camera is always located at the origin of the virtual "view". Note, however, that it is NOT NECESSARILY located at the origin of the COORDINATE SYSTEM, since you can move the camera around and transform it to anywhere in the world. The camera and the view are basically the same thing. The camera is only mentioned to represent a virtual viewing point; there is no physical camera anywhere around. I already talked about it, but it is important to understand that there is some space between the origin of the camera and the viewing plane, as you saw in the previous image. That space is the VIEWING DISTANCE.
If you look straight ahead, for example, you are considered to be looking down the camera's z-axis into negative z space, in 3D terms. Camera rotation is possible around all 3 axes, as you would expect, and is made even easier for you by OpenGL. Camera rotation is responsible for moving the view, and it's what happens when you move your virtual head around with the mouse or arrow keys in a 3D FPS shooter. Let's examine the camera a little closer. The camera, like any other object in space, has 2 coordinate systems: the Local Coordinate System and the World Coordinate System. The local coordinates are the camera's rotation angles around all of its LOCAL xyz-axes and its displacement within the local coordinate system. The world coordinates specify the camera's position in the world. For example, when you walk around in a 3D FPS-shooter kind of game you are actually moving the camera's world coordinates, and when you look around you change the camera's local coordinates. It is possible to use the local camera coordinates for moving as well, by translating them to the new location, but only BEFORE rotation is performed: rotation is also done in local coordinates around (0, 0, 0), and if you move the camera to, say, (0, 5, 0) before rotating, it will not rotate correctly because its displaced center is taken into account during rotation. Remember this rule: always rotate around the local center (0, 0, 0). If this sounds confusing, don't worry. It will all settle down the more you study and actually code in OpenGL, if you haven't already. Here's how the camera's coordinates are transformed.
If you understand this so far, that's good. Now, let's move on to object rotation basics. This is exactly the same as demonstrated in the camera rotation part of the above image. The only difference is that we're not viewing the world FROM that object; we are instead OBSERVING that object from the current camera position. This is the way an object is rotated around all 3 possible axes. When we get down to actually doing it in the following tutorials, I will make it clearer, so don't worry if you don't get something at this moment.
Just as with the camera, objects also have two coordinate systems, and as you might have guessed already, objects are positioned according to LOCAL and WORLD coordinates. The local coordinates are usually used for rotating the object, and the world coordinates are used for positioning the object in the world or, say, in a 3D level.
As you add objects and static polygons (e.g. walls, terrain, etc.) to your 3D world, you want to clip away all of the polygons that are not located in the camera's viewing volume. You also want to clip the parts of polygons that cross the edge of the viewing volume against the bounding box of the screen. The former is provided for us by OpenGL. Another issue associated with drawing polygons is that you don't want to draw the back faces (or sides) of polygons when they are turned towards the camera. Imagine a textured polygon rotated by 180 degrees so its "back" is facing us. Let's also assume that polygon is part of a bigger structure, a wall for example. Usually you will never want to see what's "behind" the wall. Have you ever wanted to see what's behind your room's wallpaper? I surely hope not. So the point is: if you rotate a textured polygon, its winding is reversed with respect to the camera view; you never want to see that side anyway, and that space is usually covered by another side of the wall, so why bother drawing it? That's right, there is no reason to, and a technique called Back-face Culling comes to our help. Back-face culling works this way: it calculates the normal of the polygon (a normal is a vector perpendicular to the polygon's surface, sticking straight out of it, and is very common in 3D graphics), and if the normal points in the same direction as the camera's view (i.e. away from the camera), the surface of that polygon is not rendered, as illustrated in this image.
This technique was so common among the older 3D engines that developers of OpenGL decided to take it into consideration and do all the dirty job for us in hardware to speed up the pipeline which is in fact the next topic of this tutorial.
THE 3D GRAPHICS PIPELINE
In case you're wondering what's up with all these pipelines everyone is talking about: a pipeline is actually nothing more than an ordered sequence of relatively distinct operations. At this stage it is too early to talk about what the operations are. Depending on what kind of program you're writing, be it a 3D FPS engine or a flight simulator, the pipeline might take different forms that work best for the given task. Therefore I'm not going to describe it here in detail, but I will as soon as we get some tasks to do in further tutorials.
OPENGL VARIABLE AND FUNCTION NAMING CONVENTIONS
In conclusion I want to say a few words on this topic. OpenGL was made for use with various environments, not just Windows. In this section I explain the naming conventions for both OpenGL functions and variables. Although you don't have to use the OpenGL-defined types, I still feel obligated to describe them here so that anyone who wants their software to be platform-independent understands what this all means. Well, let's see. OpenGL has a number of predefined types. If you never plan on being platform-independent, it might be easiest to use native C types such as int, float and double. However, if that's not the case, OpenGL has definitions that will work on the current system, whatever the system is. All you have to do is add GL in front of the standard C types. For example, if you want a floating-point type, use GLfloat instead of C's float, and if you want an int, use GLint. That works for the rest of the normal C types as well. If you want an unsigned value, just add a "u" between GL and the type, like so: GLuint is an unsigned integer. There is also a GLboolean, which holds a true/false value. GLbitfield is used to define binary bit fields. A slightly less obvious family of types in OpenGL is the clamp types, GLclampf and GLclampd for float and double values respectively; a "clamped" value is restricted to the range [0, 1], and these types are used for color composition. There are no types for pointers; pointers are defined the usual way. For instance, this is an array of pointers to int: GLint *i[16];
Each OpenGL function has a neat naming convention: the gl prefix, the function name, the number of parameters, and a suffix for the parameter type.
To demonstrate this on a real function name, I will use glVertex3f.
The last two parts of the name (the parameter count and the type suffix) are mostly encountered in functions responsible for drawing primitives. Many other functions are usually used in this form:
glVertex3f(0.0f, 0.0f, 0.0f);
|  |      ||
|  |      |+- f means all parameters are floats
|  |      +- 3 is the number of parameters
|  +- Vertex is the name of the function that renders a 3D point (or a vertex)
+- gl specifies the opengl library
AFTERWORD
Well, what can I say, this has been a long read but this isn't even close to the full picture. I tried however to cover most general topics that came to my mind. This should definitely make it easier for beginners to read the rest of tutorials. Hope the illustrations helped you in some way to understand the described topics better. Now, sit tight and wait for the next tutorials which will actually put what's been said in here to action! Feedback and suggestions are welcome.
gdcm -- Library for read/write Dicom
GDCM (Grassroots DICOM) is a library for reading and writing DICOM medical image files. It can be downloaded via CVS:
samuel@pika046:~$ cvs -d:pserver:anonymous@cvs.creatis.insa-lyon.fr:2402/cvs/public login
Logging in to :pserver:anonymous@cvs.creatis.insa-lyon.fr:2402/cvs/public
CVS password: (enter "anonymous")
samuel@pika046:~$ cvs -d:pserver:anonymous@cvs.creatis.insa-lyon.fr:2402/cvs/public co gdcm
samuel@pika046:~$ cvs -d:pserver:anonymous@cvs.creatis.insa-lyon.fr:2402/cvs/public co gdcmData
It has since also been incorporated into ITK.
There is also a discussion forum.
When configuring with ccmake, SWIG must be installed for the build to succeed.
vtk Class Usage
vtkImageViewer2
Display a 2D image.
vtkImageViewer2 is a convenience class for displaying a 2D image. It packages up the functionality found in vtkRenderWindow, vtkRenderer, vtkImageActor and vtkImageMapToWindowLevelColors into a single easy to use class. This class also creates an image interactor style (vtkInteractorStyleImage) that allows zooming and panning of images, and supports interactive window/level operations on the image. Note that vtkImageViewer2 is simply a wrapper around these classes.
vtkImageViewer2 uses the 3D rendering and texture mapping engine to draw an image on a plane. This allows for rapid rendering, zooming, and panning. The image is placed in the 3D scene at a depth based on the z-coordinate of the particular image slice. Each call to SetSlice() changes the image data (slice) displayed AND changes the depth of the displayed slice in the 3D scene. This can be controlled by the AutoAdjustCameraClippingRange ivar of the InteractorStyle member.
It is possible to mix images and geometry, using the methods:
viewer->SetInput( myImage );
viewer->GetRenderer()->AddActor( myActor );
This can be used to annotate an image with a PolyData of "edges", or highlight sections of an image, or display a 3D isosurface with a slice from the volume, etc. Any portions of your geometry that are in front of the displayed slice will be visible; any portions of your geometry that are behind the displayed slice will be obscured. A more general framework (with respect to viewing direction) for achieving this effect is provided by the vtkImagePlaneWidget.
Note that pressing 'r' will reset the window/level and pressing shift+'r' or control+'r' will reset the camera.
BYU Files
BYU is a movie-related file format; more precisely, it is the Movie.BYU surface geometry file format. It contains 4 data sections:
The first section, the data header, contains the following 4 fields:
PART_NUM, VERTEX_NUM, POLY_NUM, EDGE_NUM
For a detailed description, refer to