Command Reference#
cp#
Options for this command are client-specific. Refer to the relevant client usage as shown below.
usage: cp [-options] <source> [<target>]
Source and target paths can be one of the following formats:
file://<local file on disk>
alien://<path in the Grid catalogue>
or without a prefix, in which case it's treated as a Grid path
options:
-g : treat the Grid path as a GUID
-S <SEs|QoS:count> : Storage target specifications for uploading, default is 'disk:2'
-t : no target needed, create a local temporary file instead, print out the local path at the end
-silent : execute command silently
-w : wait for all replicas to complete upload before returning (default false)
-W : do _not_ wait for all replicas to complete upload, return as soon as the first replica is available
-T : Use this many concurrent download threads (where possible) - default 1
-d : delete local file after a successful upload (i.e. move local to Grid)
-j <job ID> : the job ID that has created the file
-m : queue mirror operations to the missing SEs, in case of partial success. Forces '-w'
-q <SEs|QoS:count> : Queue async transfers to the specified targets
The command format is as follows (the order of arguments is strict):
cp <options> src dst
or
cp <options> -input input_file
where src|dst are local files if prefixed with file:// or file:, and Grid files otherwise
and the -input argument is a file with >src dst< pairs
each src,dst can be followed by comma-separated specifiers of the form: @disk:N,SE1,SE2,!SE3
where disk:N selects the number of replicas and the following specifiers add (or, with !, remove) storage endpoints from the received list
the %ALIEN alias has the special meaning of the AliEn user home directory
options are the following:
-h : print help
-dryrun : just print the src,dst pairs that would have been transferred, without actually doing so
-f : no longer used flag! MD5 verification of an already present destination is the default; disable it with -fastcheck
-fastcheck : when an already present destination is checked for validity, compare only the size, not also the MD5 sum
-S <additional streams> : use this many additional parallel streams for the transfer (max = 15)
-chunks <nr chunks> : number of chunks that should be requested in parallel
-chunksz <bytes> : chunk size (bytes)
-T <nr_copy_jobs> : number of parallel copy jobs from a set (for recursive copy); defaults to 8 for downloads
-timeout <seconds> : fail the job if it does not finish within this number of seconds
-retry <times> : retry the copy process up to N times if it failed
-ratethreshold <bytes/s> : fail the job if the speed is lower than specified bytes/s
-noxrdzip : circumvent the XRootD mechanism of zip member copy; instead, download the whole archive and locally extract the intended member.
N.B.!!! for recursive copy (all files) the same archive will be downloaded for each member.
If there are problems with the native XRootD zip mechanism, download only the zip archive and locally extract the contents
For the recursive copy of directories the following options (of the find command) can be used:
-glob <globbing pattern> : this is the usual AliEn globbing format; N.B. this is NOT a REGEX!!! defaults to all "*"
-select <pattern> : select only these files to be copied; N.B. this is a REGEX applied to the full path!!!
-name <pattern> : select only these files to be copied; N.B. this is a REGEX applied to a directory or file name!!!
-name <verb>_string : where verb = begin|contain|ends|ext and string is the text selection criterion.
verbs are additive: -name begin_myf_contain_run1_ends_bla_ext_root
N.B. the text to be filtered cannot contain an underscore <_>!!!
-parent <parent depth> : keep <parent depth> levels of the source parent directories in the destination path; defaults to 0
-a : copy also the hidden files .* (for recursive copy)
-j <queue_id> : select only the files created by the job with <queue_id> (for recursive copy)
-l <count> : copy only <count> nr of files (for recursive copy)
-o <offset> : skip first <offset> files found in the src directory (for recursive copy)
Further filtering of the files can be applied with the following options:
-mindepth/-maxdepth N : restrict results to N directories depth relative to the base/searched for directory.
N.B. for in-directory globbing (/path1/path2/*.sh), the base directory is /path1/path2
-minsize/-maxsize N : restrict results to at least/at most N bytes in size
-min-ctime/-max-ctime UNIX_TIME : restrict results to at least/at most this UNIX_TIME (in ms, a 13-digit integer)
-user/-group string_name : restrict results to specified user/group
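As an illustration, the options above combine as follows. This is a sketch only: the Grid paths, local paths and replica counts are placeholders, not real catalogue entries.

```shell
# Download a Grid file to a local path (the file: prefix marks the local side)
cp /alice/cern.ch/user/a/auser/data.root file:/tmp/data.root

# Upload a local file, requesting 3 disk replicas via the @disk:N specifier
cp file:/tmp/results.root /alice/cern.ch/user/a/auser/results.root@disk:3

# Recursive copy of a directory, keeping only ROOT files
# (-select is a REGEX applied to the full path)
cp -select '\.root$' /alice/cern.ch/user/a/auser/run1/ file:/data/run1/
```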
quota#
Options for this command are client-specific. Refer to the relevant client usage as shown below.
cd#
pwd#
No help available for this command
mkdir#
rmdir#
usage: rmdir [<option>] <directory>
options:
--ignore-fail-on-non-empty : ignore each failure that is solely because a directory is non-empty
-p : --parents Remove DIRECTORY and its ancestors. E.g., 'rmdir -p a/b/c' is similar to 'rmdir a/b/c a/b a'.
-v : --verbose output a diagnostic for every directory processed
: --help display this help and exit
: --version output version information and exit
-silent : execute command silently
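For instance (the directory names are placeholders), the two long options can be combined with -p:

```shell
# Remove a/b/c, then a/b, then a, skipping failures on non-empty directories
rmdir -p --ignore-fail-on-non-empty a/b/c
```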
ls#
find#
usage: find [flags] <path> <pattern>
options:
-a : show hidden .* files
-s : no sorting
-c : print the number of matching files
-x <target LFN> : create the indicated XML collection with the results of the find operation. Use '-' for screen output of the XML content.
-d : return also the directories
-w[h] : long format, optionally human readable file sizes
-j <queueid> : filter files created by a certain job ID
-l <count> : limit the number of returned entries to at most the indicated value
-o <offset> : skip over the first <offset> results
-r : pattern is a regular expression
-f : return all LFN data as JSON fields (API flag only)
-y : (FOR THE OCDB) return only the biggest version of each file
-S <site name> : Sort the returned list by the distance to the given site
-e <pattern> : Exclude pattern
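A couple of illustrative invocations of the flags above; the paths and patterns are placeholders:

```shell
# List at most 10 matching files; with -r the pattern is a regular expression
find -r -l 10 /alice/cern.ch/user/a/auser/ '.*\.root$'

# Write the matches into an XML collection instead of listing them on screen
find -x /alice/cern.ch/user/a/auser/run1.xml /alice/cern.ch/user/a/auser/run1/ '*.root'
```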
toXml#
usage: toXml [-i] [-x xml_file_name] [-a] [-l list_from] [lfns]
options:
-i : ignore missing entries, continue even if some paths are not/no longer available
-x : write the XML content directly in this target AliEn file
-a : (requires -x) append to the respective collection
-l : read the list of LFNs from this file, one path per line
Additionally the client implements these options:
-local: specify that the target lfns are local files
for -x (output file) and -l (file with LFNs), the file: and alien: prefixes specify the location of the file
the inferred default is that the target files and the output file are of the same type
jsh: [alice] > toXml example.file
<?xml version="1.0"?>
<alien>
<collection name="tempCollection">
<event name="1">
<file name="example.file" aclId="" broken="0" ctime="2021-10-28 12:54:46" dir="233353357" entryId="306974419" expiretime="" gowner="alienci" guid="0f896750-37ee-11ec-8f15-024246e5e01d" guidtime="" jobid="" lfn="/alice/cern.ch/user/a/alienci/example.file" md5="3f8a7f1fa8fcfe1faeae60b6036de9de" owner="alienci" perm="400" replicated="0" size="40" turl="alien:///alice/cern.ch/user/a/alienci/example.file" type="f" />
</event>
<info command="example.file" creator="alienci" date="Wed Mar 22 17:02:54 UTC 2023" timestamp="1679504574533" />
</collection>
</alien>
cat#
whereis#
rm#
mv#
touch#
type#
lfn2guid#
guid2lfn#
guidinfo#
access#
usage: access [options] <read|write> <lfn> [<specs>]
-s : for write requests, size of the file to be uploaded, when known
-m : for write requests, MD5 checksum of the file to be uploaded, when known
-j : for write requests, the job ID that created these files, when applicable
-f : for read requests, filter the SEs based on the given specs list
-u : for read requests, print http(s) URLs where available, and the envelopes in urlencoded format
commit#
chown#
chmod#
deleteMirror#
md5sum#
mirror#
mirror Copies/moves a file to one or more other SEs
Usage:
mirror [-g] [-try <number>] [-r SE] [-S [se[,se2[,!se3[,qos:count]]]]] <lfn> [<SE>]
-g: Use the lfn as a guid
-S: specifies the destination SEs/tags to be used
-r: remove this source replica after a successful transfer (a `move` operation)
-try <attempts> Specifies the number of attempts to try and mirror the file (default 5)
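As a sketch of the options above (the LFN is a placeholder; the SE names follow the ALICE::SITE::SE convention seen elsewhere in this reference):

```shell
# Add two disk replicas of an LFN, attempting each transfer up to 3 times
mirror -try 3 -S disk:2 /alice/cern.ch/user/a/auser/results.root

# Move a replica: transfer to the target SE, then remove the source replica (-r)
mirror -r ALICE::CERN::EOS /alice/cern.ch/user/a/auser/results.root ALICE::FZK::SE
```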
grep#
changeDiff#
listFilesFromCollection#
packages#
listCEs#
jobListMatch#
listpartitions#
setCEstatus#
submit#
ps#
usage: ps [-options]
options:
-F l | -Fl | -L : long output format
-f <flags|status> : any number of (long or short) upper case job states, or 'a' for all, 'r' for running states, 'f' for failed, 'd' for done, 's' for queued
-u <userlist>
-s <sitelist>
-n <nodelist>
-m <masterjoblist>
-o <sortkey>
-j <jobidlist>
-l <query-limit>
-M : show only masterjobs
-X : active jobs in extended format
-A : select all of your owned jobs
-W : select all of your jobs which are waiting for execution
-E : select all of your jobs which are in an error state
-a : select jobs of all users
-b : print black-and-white output only
-jdl <jobid> : display the job jdl
-trace <jobid> <tag>* : display the job trace information
-id : only list the matching job IDs, for batch processing (implies -b)
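Some typical combinations of the above; the job ID is a placeholder:

```shell
# Long listing of your running jobs
ps -f r -Fl

# Only the matching job IDs of failed jobs, suitable for scripting (implies -b)
ps -f f -id

# Show the JDL and the trace of a given job
ps -jdl 123456789
ps -trace 123456789
```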
masterjob#
usage: masterjob <jobIDs> [-options]
options:
-status <status> : display only the subjobs with that status
-id <id> : display only the subjobs with that id
-site <site> : display only the subjobs on that site
-printid : print also the id of all the subjobs
-printsite : split the number of jobs according to the execution site
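For example (the job ID and status value are placeholders):

```shell
# Summarize a masterjob, split per execution site, also listing the subjob IDs
masterjob 123456789 -printsite -printid

# Restrict the listing to subjobs in a given state
masterjob 123456789 -status ERROR_E
```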
kill#
w#
uptime#
resubmit#
top#
registerOutput#
df#
du#
fquota#
jquota#
listSEs#
listSEDistance#
listSEDistance: Returns the closest working SEs for a particular site.
options:
-site : site to base the results on, instead of using the default mapping of this client to a site
-read : use the read metrics, optionally with an LFN for which to sort the replicas. Default is to print the write metrics.
-qos : restrict the returned SEs to this particular tag
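Two illustrative invocations; the LFN is a placeholder:

```shell
# Closest working SEs for writing, restricted to disk storage
listSEDistance -qos disk

# Read metrics, sorting the replicas of a given LFN by distance
listSEDistance -read /alice/cern.ch/user/a/auser/data.root
```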
setSite#
testSE#
Test the functional status of Grid storage elements
Usage: testSE [options] <some SE names, numbers or @tags>
-v : verbose error messages even when the operation is expected to fail
-c : show full command line for each test
-t : time each operation
-a : test all SEs (obviously a very long operation)
listTransfer#
uuid#
stat#
jsh: [alice] > stat example.file
File: /alice/cern.ch/user/a/alienci/example.file
Type: f
Owner: alienci:alienci
Permissions: 400
Last change: 2021-10-28 12:54:46.0 (1635425686000)
Size: 40 (40 B)
MD5: 3f8a7f1fa8fcfe1faeae60b6036de9de
GUID: 0f896750-37ee-11ec-8f15-024246e5e01d
GUID created on Thu Oct 28 12:53:34 UTC 2021 (1635425614405) by 02:42:46:e5:e0:1d
xrdstat#
usage: xrdstat [-d [-i]] [-v] [-p PID,PID,...] [-s SE1,SE2,...] [-c] <filename1> [<or UUID>] ...
options:
-d : Check by physically downloading each replica and checking its content. Without this a stat (metadata) check is done only.
-i : When downloading each replica, ignore `stat` calls and directly try to fetch the content.
-s : Comma-separated list of SE names to restrict the checking to. Default is to check all replicas.
-c : Print the full command line in case of errors.
-v : More details on the status.
-p : Comma-separated list of job IDs to check the input data of
-o : Only show the online status (for files with tape replicas in particular)
-O : Request the file to be brought online
-4 : Force IPv4 usage on all commands
-6 : Force IPv6 usage on all commands
jsh: [alice] > xrdstat example.file
Checking the replicas of /alice/cern.ch/user/a/alienci/example.file
ALICE::CERN::EOS username://eosalice.cern.ch:1094//02/03037/0f896750-37ee-11ec-8f15-024246e5e01d OK
ALICE::FZK::SE username://alice-disk-se.gridka.de:1094//02/03037/0f896750-37ee-11ec-8f15-024246e5e01d OK
resyncLDAP#
optimiserLogs#
showTagValue#
time#
timing#
commandlist#
motd#
ping#
jsh: [alice] > ping 5
Sending 3 messages with a pause of 1000 ms between them
reply from 137.138.99.147 (alice-jcentral.cern.ch / aliendb10.cern.ch): time=108.1 ms
reply from 137.138.99.147 (alice-jcentral.cern.ch / aliendb10.cern.ch): time=1.345 ms
reply from 137.138.99.147 (alice-jcentral.cern.ch / aliendb10.cern.ch): time=242.3 ms
3 packets transmitted, time 2354 ms
rtt min/avg/max/mdev = 1.345/117.2/242.3/98.57 ms
Central service endpoint information:
hostname : aliendb10.cern.ch
version#
alien.py version: 1.4.6
alien.py version date: 20230207_140652
alien.py version hash: 3ec8764
alien.py location: /persistent/sw/slc7_x86-64/xjalienfs/master-local48/lib/python/site-packages/alienpy/alien.py
script location: /persistent/sw/slc7_x86-64/xjalienfs/master-local48/bin/alien.py
Interpreter: /persistent/sw/slc7_x86-64/Python/v3.9.12-local6/bin/python3.9
Python version: 3.9.12 (main, Mar 17 2023, 13:53:57)
[GCC 7.3.0]
XRootD version: 5.5.3
XRootD path: /builds/jalien/jalien-ci/sw/slc7_x86-64/XRootD/v5.5.3-local3/lib/python/site-packages/XRootD/client/__init__.py
whoami#
user#
whois#
groups#
token#
usage: token [-options]
options:
-u <username> : switch to another role of yours
-v <validity (days)> : default depends on token type
-t <tokentype> : can be one of: job, jobagent, host, user (default)
-jobid <job DN extension> : expected to be present in a job token
-hostname <FQDN> : required for a host certificate
This is a print-only command!!! Use >token-init< for token (re)generation; it accepts the same arguments as listed above.
lfnexpiretime#
usage: lfnexpiretime [-options] [<file>]
options:
-r : removes the expire time set for an LFN
-a : add a new expire time for the given LFN
-e : extends the current expire time for the given LFN
-d <number> : specifies the number of days in the expire time
-w <number> : specifies the number of weeks in the expire time
-m <number> : specifies the number of months in the expire time
-y <number> : specifies the number of years in the expire time
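The options above compose as in this sketch; the LFN is a placeholder:

```shell
# Set a 6-month expiry on a temporary file
lfnexpiretime -a -m 6 /alice/cern.ch/user/a/auser/tmp/scratch.root

# Extend the current expiry by two weeks
lfnexpiretime -e -w 2 /alice/cern.ch/user/a/auser/tmp/scratch.root

# Remove the expiry entirely
lfnexpiretime -r /alice/cern.ch/user/a/auser/tmp/scratch.root
```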