tomotools / Sluurp
Commit e06896db, authored 2 weeks ago by payno
SBatchScriptJob: create a directory dedicated to cupy (and make sure it exists)
parent 33a3d18e
No related branches found
No related tags found
1 merge request: !18 SBatchScriptJob: create a directory dedicated to cupy (and make sure it exists)
Pipeline #229767 passed 2 weeks ago (stages: style, test)
Showing 1 changed file: src/sluurp/job.py (+14, −6)
@@ -164,16 +164,23 @@ class SBatchScriptJob(ScriptJob):
         self._slurm_config = slurm_config if slurm_config is not None else {}
         self._sbatch_extra_params = self._slurm_config.pop("sbatch_extra_params", {})
-        self._pycuda_cache_dir = os.path.join(os.path.dirname(self.script_path), f".pycuda_cache_dir_{uuid4()}")
-        # as we can have some incoherent home directory we need for some computer to define the pycuda cache directory (for iccbm181 for example)
+        # handling pycuda and cupy cache directories.
+        # as we can have some incoherent home directory we need for some computer to define the pycuda and cupy cache directory (for iccbm181 for example)
+        # uuid4: make sure it is unique each time. Else we can get conflict with different scripts
+        # using the same pycuda dir. And the first one ending it will delete it
+        # and the second script will not be able to process...
+        self._pycuda_cache_dir = os.path.join(os.path.dirname(self.script_path), f".pycuda_cache_dir_{uuid4()}")
         self._script.insert(0, f"mkdir -p '{self._pycuda_cache_dir}'")
         self._script.insert(1, f"export PYCUDA_CACHE_DIR='{self._pycuda_cache_dir}'")
+        self._cupy_cache_dir = os.path.join(os.path.dirname(self.script_path), f".cupy_cache_dir_{uuid4()}")
+        self._script.insert(0, f"mkdir -p '{self._cupy_cache_dir}'")
+        self._script.insert(1, f"export CUPY_CACHE_DIR='{self._cupy_cache_dir}'")

     def set_status(self, status):
         self._status = status
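To make the effect of the inserted lines concrete, here is a minimal standalone sketch of the same pattern; the script_path value, the script_lines list and the payload command are hypothetical placeholders, not part of the sluurp API:

import os
from uuid import uuid4

# hypothetical job script location and payload, for illustration only
script_path = "/tmp/my_job/script.sh"
script_lines = ["python run_reconstruction.py"]

# per-job cache directory next to the script; uuid4() keeps concurrent jobs
# from sharing (and later deleting) the same directory
cupy_cache_dir = os.path.join(os.path.dirname(script_path), f".cupy_cache_dir_{uuid4()}")

# prepend the directory creation and the environment export, as the commit does
script_lines.insert(0, f"mkdir -p '{cupy_cache_dir}'")
script_lines.insert(1, f"export CUPY_CACHE_DIR='{cupy_cache_dir}'")

print("\n".join(script_lines))
# mkdir -p '/tmp/my_job/.cupy_cache_dir_<uuid>'
# export CUPY_CACHE_DIR='/tmp/my_job/.cupy_cache_dir_<uuid>'
# python run_reconstruction.py

Note that because both the pycuda pair and the cupy pair are inserted at indices 0 and 1, the cupy lines end up ahead of the pycuda lines in the generated script; both pairs still run before the original payload.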
@@ -248,8 +255,9 @@ class SBatchScriptJob(ScriptJob):
             file_object.write(pre_processing_line + "\n")

     def _do_job_artefacts_cleaning(self):
-        if os.path.exists(self._pycuda_cache_dir):
-            shutil.rmtree(self._pycuda_cache_dir, ignore_errors=True)
+        for folder in (self._pycuda_cache_dir, self._cupy_cache_dir):
+            if os.path.exists(folder):
+                shutil.rmtree(folder, ignore_errors=True)

     def collect_logs(self):
         try:
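The cleanup hunk replaces the pycuda-only removal with a single loop over both cache directories. A self-contained sketch of that pattern, assuming a hypothetical helper name (cleanup_cache_dirs is not part of sluurp):

import os
import shutil

def cleanup_cache_dirs(*folders: str) -> None:
    # remove every per-job cache directory that was actually created;
    # ignore_errors=True so a half-deleted directory cannot fail the cleanup
    for folder in folders:
        if os.path.exists(folder):
            shutil.rmtree(folder, ignore_errors=True)

cleanup_cache_dirs(
    "/tmp/my_job/.pycuda_cache_dir_example",
    "/tmp/my_job/.cupy_cache_dir_example",
)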