Commit 7457d429 authored by Pierre Paleo

Test FBP

parent 926e1699
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license
document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
a) Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license
document.
c) For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option 4d0, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.
\ No newline at end of file
#!/usr/bin/env python
# coding: utf-8
# /*##########################################################################
#
# Copyright (c) 2018 European Synchrotron Radiation Facility
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# ###########################################################################*/
__version__ = "0.1.0"
import numpy as np
from ..utils import updiv
import pycuda.driver as cuda
import pycuda.gpuarray as garray
from pycuda.compiler import SourceModule
class CudaKernel(object):
"""
Helper class that wraps a CUDA kernel through pycuda's SourceModule.
Parameters
----------
kernel_name: str
Name of the CUDA kernel.
filename: str, optional
Path to the file containing the kernel definitions.
src: str, optional
Source code of the kernel definitions.
signature: str, optional
Signature of kernel function. If provided, pycuda will not guess the types
of kernel arguments, making the calls slightly faster.
For example, a function acting on two pointers, an integer and a float32
has the signature "PPif".
texrefs: list, optional
List of texture references, if any
automation_params: dict, optional
Automation parameters, see below
sourcemodule_kwargs: optional
Extra arguments to provide to pycuda.compiler.SourceModule().
Automation parameters
----------------------
automation_params is a dictionary with the following keys and default values.
guess_block: bool (True)
If block is not specified during calls, choose a block size based on
the size/dimensions of the first array.
Note that this is unlikely to be the optimal choice.
guess_grid: bool (True)
If the grid size is not specified during calls, choose a grid size
based on the size of the first array.
follow_gpuarr_ptr: bool (True)
Automatically pass the underlying pointer (gpuarray.gpudata) for all
GPUArray arguments. Otherwise, an error is raised.
"""
def __init__(
self,
kernel_name,
filename=None,
src=None,
signature=None,
texrefs=[],
automation_params=None,
**sourcemodule_kwargs
):
self.check_filename_src(filename, src)
self.set_automation_params(automation_params)
self.compile_kernel_source(kernel_name, sourcemodule_kwargs)
self.prepare(signature, texrefs)
def check_filename_src(self, filename, src):
err_msg = "Please provide either filename or src"
if filename is None and src is None:
raise ValueError(err_msg)
if filename is not None and src is not None:
raise ValueError(err_msg)
if filename is not None:
with open(filename) as fid:
src = fid.read()
self.filename = filename
self.src = src
def set_automation_params(self, automation_params):
self.automation_params = {
"guess_block": True,
"guess_grid": True,
"follow_gpuarr_ptr": True,
}
automation_params = automation_params or {}
self.automation_params.update(automation_params)
def compile_kernel_source(self, kernel_name, sourcemodule_kwargs):
self.sourcemodule_kwargs = sourcemodule_kwargs
self.kernel_name = kernel_name
self.module = SourceModule(self.src, **self.sourcemodule_kwargs)
self.func = self.module.get_function(kernel_name)
def prepare(self, kernel_signature, texrefs):
self.prepared = False
self.last_kernel_time = None
self.kernel_signature = kernel_signature
self.texrefs = texrefs
if kernel_signature is not None:
self.func.prepare(self.kernel_signature, texrefs=texrefs)
self.prepared = True
@staticmethod
def guess_grid_size(shape, block_size):
# python: (z, y, x) -> cuda: (x, y, z)
res = tuple(updiv(n, b) for n, b in zip(shape[::-1], block_size))
# Pad to three dimensions, as expected for a CUDA grid size
return res + (1,) * (3 - len(res))
@staticmethod
def guess_block_size(shape):
"""
Guess a block size based on the shape of an array.
"""
ndim = len(shape)
if ndim == 1:
return (128, 1, 1)
if ndim == 2:
return (32, 32, 1)
else:
return (16, 8, 8)
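# Example (illustrative): for a (2048, 2048) array, guess_block_size returns
# (32, 32, 1), and guess_grid_size then gives updiv(2048, 32) = 64 blocks along
# x and y, i.e. a (64, 64, 1) grid.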
def get_block_grid(self, *args, **kwargs):
block = None
grid = None
if ("block" not in kwargs) or (kwargs["block"] is None):
if self.automation_params["guess_block"]:
block = self.guess_block_size(args[0].shape)
else:
raise ValueError("Please provide block size")
else:
block = kwargs["block"]
if ("grid" not in kwargs) or (kwargs["grid"] is None):
if self.automation_params["guess_grid"]:
grid = self.guess_grid_size(args[0].shape, block)
else:
raise ValueError("Please provide block grid")
else:
grid = kwargs["grid"]
self.last_block_size = block
self.last_grid_size = grid
return block, grid
def follow_gpu_arr(self, args):
args = list(args)
# Replace GPUArray with GPUArray.gpudata
for i, arg in enumerate(args):
if isinstance(arg, garray.GPUArray):
args[i] = arg.gpudata
return tuple(args)
def get_last_kernel_time(self):
"""
Return the execution time (in seconds) of the last called kernel.
The last called kernel should have been called with time_kernel=True.
"""
if self.last_kernel_time is None:
return None
# prepared_timed_call returns a callable giving the elapsed time, while a plain
# call with time_kernel=True returns the elapsed time directly
if callable(self.last_kernel_time):
return self.last_kernel_time()
return self.last_kernel_time
def call(self, *args, **kwargs):
block, grid = self.get_block_grid(*args, **kwargs)
args = self.follow_gpu_arr(args)
if self.prepared:
func_call = self.func.prepared_call
if "time_kernel" in kwargs:
func_call = self.func.prepared_timed_call
kwargs.pop("time_kernel")
if "block" in kwargs:
kwargs.pop("block")
if "grid" in kwargs:
kwargs.pop("grid")
t = func_call(grid, block, *args, **kwargs)
else:
kwargs["block"] = block
kwargs["grid"] = grid
t = self.func(*args, **kwargs)
#~ return t
# TODO return event like in OpenCL ?
self.last_kernel_time = t # list ?
__call__ = call
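A minimal usage sketch of the CudaKernel wrapper (illustrative only, not part of the commit): the kernel name scale_array, its source string and the array size are made up for the example, and pycuda.autoinit is used just to obtain a context. Block and grid sizes are left to the guess_block/guess_grid automation.
import numpy as np
import pycuda.autoinit  # creates a default CUDA context for this sketch
import pycuda.gpuarray as garray
example_src = """
__global__ void scale_array(float* arr, float factor, int width, int height) {
    int x = blockDim.x * blockIdx.x + threadIdx.x;
    int y = blockDim.y * blockIdx.y + threadIdx.y;
    if ((x >= width) || (y >= height)) return;
    arr[y * width + x] *= factor;
}
"""
d_arr = garray.to_gpu(np.ones((512, 512), dtype=np.float32))
# CudaKernel is the class defined above; signature "Pfii" = pointer, float, int, int
scale = CudaKernel("scale_array", src=example_src, signature="Pfii")
scale(d_arr, np.float32(2.0), np.int32(512), np.int32(512), time_kernel=True)
print(d_arr.get().max(), scale.get_last_kernel_time())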
from .utils import get_cuda_context
# WIP
# TODO add logger ? inherit from Processing class with logger ?
class CudaProcessing(object):
def __init__(self, device_id=None, cleanup_at_exit=True):
self.ctx = get_cuda_context(
device_id=device_id,
cleanup_at_exit=cleanup_at_exit
)
#include <pycuda-complex.hpp>
typedef pycuda::complex<float> complex;
// arr2D *= arr1D (line by line, i.e. along the fast dimension)
__global__ void inplace_complex_mul_2Dby1D(complex* arr2D, complex* arr1D, int width, int height) {
int x = blockDim.x * blockIdx.x + threadIdx.x;
int y = blockDim.y * blockIdx.y + threadIdx.y;
if ((x >= width) || (y >= height)) return;
// The in-place complex multiplication below does not seem to work
// Use cuCmulf from cuComplex.h ?
//~ arr2D[y*width + x] *= arr1D[x];
int i = y*width + x;
complex a = arr2D[i];
complex b = arr1D[x];
arr2D[i]._M_re = a._M_re * b._M_re - a._M_im * b._M_im;
arr2D[i]._M_im = a._M_im * b._M_re + a._M_re * b._M_im;
}
// arr3D *= arr1D (along fast dim)
__global__ void inplace_complex_mul_3Dby1D(complex* arr3D, complex* arr1D, int width, int height, int depth) {
int x = blockDim.x * blockIdx.x + threadIdx.x;
int y = blockDim.y * blockIdx.y + threadIdx.y;
int z = blockDim.z * blockIdx.z + threadIdx.z;
if ((x >= width) || (y >= height) || (z >= depth)) return;
// The in-place complex multiplication below does not seem to work
// Use cuCmulf from cuComplex.h ?
//~ arr3D[(z*height + y)*width + x] *= arr1D[x];
int i = (z*height + y)*width + x;
complex a = arr3D[i];
complex b = arr1D[x];
arr3D[i]._M_re = a._M_re * b._M_re - a._M_im * b._M_im;
arr3D[i]._M_im = a._M_im * b._M_re + a._M_re * b._M_im;
}
#define BLOCK_SIZE 512
#define BLOCK_SIZ2 (BLOCK_SIZE*2)
texture<float, 2, cudaReadModeElementType> tex_projections;
texture<float, 2, cudaReadModeElementType> tex_msin_lut;
texture<float, 2, cudaReadModeElementType> tex_cos_lut;
// Backproject one slice
// One thread handles up to 4 pixels in the output slice.
// The case num_projs > 1024 has yet to be handled.
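// For a pixel at reduced coordinates (xr, yr) = (x - axis_position, y - axis_position)
// and a projection at angle theta, the detector bin hit by this pixel is
//     h = axis_position + xr*cos(theta) - yr*sin(theta)
// d_cos and d_msin are expected to hold cos(theta) and -sin(theta) for each projection,
// so the reconstructed pixel is the sum over projections of tex_projections(h, proj),
// scaled by scale_factor.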
__global__ void backproj(
float* d_slice,
int num_projs,
int num_bins,
float axis_position,
int n_x,
int n_y,
float* d_cos,
float* d_msin,
float scale_factor
)
{
int x = blockDim.x * blockIdx.x + threadIdx.x;
int y = blockDim.y * blockIdx.y + threadIdx.y;
if ((x >= n_x/2) || (y >= n_y/2)) return;
int Gx = blockDim.x * gridDim.x;
int Gy = blockDim.y * gridDim.y;
// (xr, yr) (xrp, yr)
// (xr, yrp) (xrp, yrp)
float xr = x - axis_position, yr = y - axis_position;
float xrp = xr + Gx, yrp = yr + Gy;
/*volatile*/ __shared__ float shared_mem[2048];
/*volatile*/ float* s_cos = shared_mem;
/*volatile*/ float* s_msin = shared_mem + 1024;
// Fetch "blockDim.x" values to shared memory
int tid = threadIdx.y * blockDim.x + threadIdx.x;
if (tid < num_projs) { // avoid out-of-bounds reads of d_cos/d_msin
s_cos[tid] = d_cos[tid];
s_msin[tid] = d_msin[tid];
}
int next_fetch = BLOCK_SIZE;
__syncthreads();
// ------------
float costheta, msintheta;
float h1, h2, h3, h4;
float sum1 = 0.0f, sum2 = 0.0f, sum3 = 0.0f, sum4 = 0.0f;
int proj = 0, proj_loc = 0;
for (proj = 0, proj_loc = 0; proj < num_projs; proj++, proj_loc++) {
/*
if (proj == next_fetch) {
if (tid + next_fetch < num_projs) {
s_cos[tid] = d_cos[next_fetch + tid];
s_msin[tid] = d_msin[next_fetch + tid];
__syncthreads();
}
next_fetch += BLOCK_SIZE;
proj_loc = 0;
}
*/
costheta = s_cos[proj_loc];
msintheta = s_msin[proj_loc];
float c1 = fmaf(costheta, xr, axis_position); // cos(theta)*xr + axis_pos
float c2 = fmaf(costheta, xrp, axis_position); // cos(theta)*(xr + Gx) + axis_pos
float s1 = fmaf(msintheta, yr, 0.0f); // -sin(theta)*yr
float s2 = fmaf(msintheta, yrp, 0.0f); // -sin(theta)*(yr + Gy)
h1 = c1 + s1;
h2 = c2 + s1;
h3 = c1 + s2;
h4 = c2 + s2;
//~ if (h >= 0 && h < num_bins)
if (h1 >= 0 && h1 < num_bins) sum1 += tex2D(tex_projections, h1 + 0.5f, proj + 0.5f);
if (h2 >= 0 && h2 < num_bins) sum2 += tex2D(tex_projections, h2 + 0.5f, proj + 0.5f);
if (h3 >= 0 && h3 < num_bins) sum3 += tex2D(tex_projections, h3 + 0.5f, proj + 0.5f);
if (h4 >= 0 && h4 < num_bins) sum4 += tex2D(tex_projections, h4 + 0.5f, proj + 0.5f);
}
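// Each thread writes its four accumulated pixels: the grid covers one quadrant
// (Gx x Gy threads) and the slice is assumed to be (2*Gx) x (2*Gy) pixels,
// the other three pixels being offset by Gx and/or Gy.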
d_slice[y*(2*Gx) + x] = sum1 * scale_factor;
d_slice[y*(2*Gx) + Gx + x] = sum2 * scale_factor;
d_slice[(y+Gy)*(2*Gx) + x] = sum3 * scale_factor;
d_slice[(y+Gy)*(2*Gx) + Gx + x] = sum4 * scale_factor;
}
/**
*
* Old stuff / tinkering
*
*
**/
/*
__global__ void backproj_cust(
float* d_slice,
int num_projs,
int num_bins,
float axis_position,
int n_x,
int n_y,
float scale_factor
)
{
int x = blockDim.x * blockIdx.x + threadIdx.x;
int y = blockDim.y * blockIdx.y + threadIdx.y;
if ((x >= n_x) || (y >= n_y)) return;
int xr = (x - axis_position), yr = (y - axis_position);
float sum = 0.0f;
for (int proj = 0; proj < num_projs; proj++) {
// h = axis_pos + x*cos - y*sin
// = (cos, -sin) .* (x, y) + axis_pos
#if USE_F2 == 1
float2 cos_msin = tex2D(tex_sincos_lut, proj + 0.5f, 0.5f);
float h = cos_msin.x * xr + cos_msin.y * yr + axis_position; // TODO dot() ?
//~ float h = __cosf(3.141592f/500*proj) * xr - __sinf(3.141592f/500*proj) * yr + axis_position; // fastest (3x with __intrinsics)
//~ float h = xr*0.1f - yr*0.2f + axis_position;
#else
//~ float pcos = tex2D(tex_cos_lut, proj + 0.5f, 0.5f);
//~ float pmsin = tex2D(tex_msin_lut, proj + 0.5f, 0.5f);
//~ float h = pcos * xr + pmsin * yr + axis_position; // TODO dot() ?
float h = cosf(3.141592f/500*proj) * xr - sinf(3.141592f/500*proj) * yr + axis_position;
#endif
// if(h >= 0 && h < num_bins)
sum += tex2D(tex_projections, h + 0.5f, proj + 0.5f);
}
d_slice[y*num_bins + x] = sum * scale_factor;
}
__global__ void backproj_cust2(
float* d_slice,
int num_projs,
int num_bins,
float axis_position,
int n_x,
int n_y,
float* d_cos,
float* d_msin,
float scale_factor
)
{
int x = blockDim.x * blockIdx.x + threadIdx.x;
int y = blockDim.y * blockIdx.y + threadIdx.y;
if ((x >= n_x) || (y >= n_y)) return;
float xr = (x - axis_position), yr = (y - axis_position);
volatile __shared__ float shared_mem[BLOCK_SIZ2];
volatile float* s_cos = shared_mem;
volatile float* s_msin = shared_mem + BLOCK_SIZE;
// Fetch "blockDim.x" values to shared memory
int tid = threadIdx.x;
s_cos[tid] = d_cos[tid];
s_msin[tid] = d_msin[tid];
//
//
int next_fetch = BLOCK_SIZE;
__syncthreads();
// ------------
float sum = 0.0f;
int proj = 0, proj_loc = 0;
for (proj = 0, proj_loc = 0; proj < num_projs; proj++, proj_loc++) {
//~ if (proj == next_fetch) {
//~ if (tid + next_fetch < num_projs) {
//~ s_cos[tid] = d_cos[next_fetch + tid];
//~ s_msin[tid] = d_msin[next_fetch + tid];
//~ __syncthreads();
//~ }
//~ next_fetch += BLOCK_SIZE;
//~ proj_loc = 0;
//~ }
// h = axis_pos + x*cos - y*sin
// = (cos, -sin) .* (x, y) + axis_pos
//~ float h = s_cos[proj_loc] * xr + s_msin[proj_loc] * yr + axis_position; // TODO dot() ?
float h =
fmaf(s_cos[proj_loc], xr, axis_position) +
fmaf(s_msin[proj_loc], yr, 0.0f);
//~ if (h >= 0 && h < num_bins)
sum += tex2D(tex_projections, h + 0.5f, proj + 0.5f);