A personal analysis note for the JLab E12-17-003 experiment (Λnn search).
Replay
Git server for analysis code
GitHub
https://github.com/tgdragon/HallA-Online-Tritium.git
Kyoto server (kgit_server)
Instruction → PDF
Download source files
You can use any one of the following options:
1. $ git clone https://github.com/tgdragon/HallA-Online-Tritium.git
2. $ git clone https://github.com/JeffersonLab/HallA-Online-Tritium.git
3. $ git clone kgit_server:/home/git/git/jlab/nnL_ana.git
The third option is recommended when running the replay on the Kyoto machine
(the analysis PC in Kyoto).
Compile the codes
In the case of the GitHub version:
- $ cd replay/libralies/
  $ ./libs.sh
- $ cd replay/
  $ analyzer
  analyzer [0] .L ReplayCore64.C++
  (→ ReplayCore64_C.so will be generated)
In the case of the kgit_server version:
- $ cd replay/libTriton/
  $ make
- $ cd replay/
  $ analyzer
  analyzer [0] .L ReplayCore64.C++
  (→ ReplayCore64_C.so will be generated)
Executing replay
$ cd replay/
$ ./fullReplay
If you use the source code from tgdragon (cloned from the tgdragon GitHub repository),
the new ROOT files will be stored in replay/Rootfiles/nnL/*.root by default.
Ole's code (Current production code)
The first version (ver. Nov 7, 2019)
1. Replay:
$ cd HallA-Online-Tritium/replay_ole0/
(or $ cd nnL_ana/replay/ in the case of the kgit_server version)
$ cd libTriton/; make ; cd ../
$ analyzer
analyzer [0] .L ReplayCore64.C++
analyzer [1] .q
$ ./fullReplay
2. Analyzer ver. 1.7.0 may be required → e.g. in .bashrc of the Kyoto PC:
export ANALYZER=/sfw/analyzer-1.7.0
export PATH=$ANALYZER/bin:$PATH
export LD_LIBRARY_PATH=$ANALYZER/lib64:$LD_LIBRARY_PATH
To replay some runs
$ cd HallA-Online-Tritium/replay_ole0/
$ ./gogogo.py (you can specify a data file here, e.g. h2_replay.dat)
(After the replay:)
$ ./gen_maruhadaka_dat.py
→ This generates the data file needed by Rootfiles/maruhadaka.cc (e.g. h2.dat).
Data size reduction
ROOT files for missing-mass reconstruction
0. Prepare original ROOT files in replay/Rootfiles/nnL/
1. $ cd replay/Rootfiles/
2. Prepare a data file that specifies the run#, file#, and hypflag (see the sketch after this list).
- h2.dat (H data for H kinematics)
- h22.dat (H data for T kinematics)
3. Compile
$ make
4. Execute maruhadaka with tosmall.py
$ emacs -nw tosmall.py ← specify the data file you prepared here.
$ ./tosmall.py
→ You will have a new ROOT file (such as h2.root in replay/Rootfiles/nnL/coin_dragon2/).
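For reference, a hypothetical layout of such a data file (the actual column order is not documented here; check an existing file such as h2.dat), assuming one entry per line with the three values listed in step 2, might look like:
  <run#>  <file#>  <hypflag>
  <run#>  <file#>  <hypflag>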
Kaon identification
Coincidence time
- Coincidence time is defined as the time difference between the RHRS and LHRS trigger timings:
Tcoin = TR - TL.
- The timing is arranged so that the real e' signal always arrives later than the K+ signal.
Therefore, the trigger timing of the coincidence data is determined by the e' timing for real coincidence events.
This means that the coincidence time can be seen by looking at the RHRS timing alone.
- Corrections for t-zero, path length, etc. are applied to the coincidence time by "maruhadaka.cc" (run via tosmall.py in replay/Rootfiles/) → the variable name is ctime[100].
Aerogel Cherenkovs
- There are two aerogel Cherenkov counters.
One (A1) is used for π+ rejection, and the other (A2) for π and p rejection.
The refractive indices of A1 and A2 are 1.015 and 1.055, respectively.
- The variable names are R.a1.asum_c and R.a2.asum_c.
These are not raw ADC values but are already numbers of photoelectrons (NPE) if you use a replay cloned from tgdragon (see the sketch below).
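A minimal sketch of a kaon selection using these variables (cut values are taken from the sample plot below and are not optimized; the tree name "tree" also follows the sample plot):
  tree->Draw("ctime[0]","R.a1.asum_c>-0.1 && R.a1.asum_c<2. && R.a2.asum_c>2. && R.a2.asum_c<10.","")
Here the A1 cut rejects π+ and the A2 window rejects π and p, as described above.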
Vertex z
Matrix
- A third-order matrix is used for each HRS:
(RHRS) replay/analyzer/matrices/zt_RHRS_opt.dat
(LHRS) replay/analyzer/matrices/zt_LHRS_opt.dat
Averaging between R and L
1. Event selection: abs(R.tr.vz[0]-L.tr.vz[0])< 0.03 (not optimized)
2. Averaging: (R.tr.vz[0]+L.tr.vz[0])/2.
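For example (a minimal sketch, assuming the same tree as in the sample plot below):
  tree->Draw("(R.tr.vz[0]+L.tr.vz[0])/2.","abs(R.tr.vz[0]-L.tr.vz[0])<0.03","")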
Sample plot
- Test:
1. root /home/dragon/HallA-Online-Tritium/replay/analysis/mm/coin_H2_upto-111542.root
2. tree->Draw("ctime[0]","R.a1.asum_c>-0.1 && R.a1.asum_c< 2. && R.a2.asum_c>2. && R.a2.asum_c < 10. && abs(R.tr.vz[0])< 0.25 && abs(L.tr.vz[0])<0.25 && abs(R.tr.vz[0]-L.tr.vz[0]) < 0.03 && abs((R.tr.vz[0]+L.tr.vz[0])/2.)<0.08","")
- Exercise (example commands are sketched after this list):
1. AC1 NPE vs. coin time
2. AC2 NPE vs. coin time
3. AC2 NPE vs. AC1 NPE
4. Vertex z vs. FP variables
5. coin time vs. FP variables
6. etc.
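Hedged example commands for exercises 1-3 (variable names as given above; cut strings are omitted and should be added as needed):
  tree->Draw("R.a1.asum_c:ctime[0]","","colz")
  tree->Draw("R.a2.asum_c:ctime[0]","","colz")
  tree->Draw("R.a2.asum_c:R.a1.asum_c","","colz")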
Cut Efficiency: PDF
- Fitting for vz:
$ cd analysis/mm/dummy/
$ root vzfit.cc
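The actual procedure is in vzfit.cc. Purely as an illustration, a minimal hypothetical macro (input file, tree name, fit range, and the single-Gaussian shape are all assumptions) for estimating the fraction of events kept by a |vz| cut could look like:
// vzfit_sketch.cc -- hypothetical illustration, NOT the actual vzfit.cc
void vzfit_sketch(){
  TFile *f = TFile::Open("h2.root");                 // assumed input file
  TTree *tree = (TTree*)f->Get("tree");              // assumed tree name
  TH1D *h = new TH1D("h","vertex z;z (m);counts",200,-0.2,0.2);
  tree->Draw("(R.tr.vz[0]+L.tr.vz[0])/2.>>h","abs(R.tr.vz[0]-L.tr.vz[0])<0.03");
  TF1 *g = new TF1("g","gaus",-0.15,0.15);
  h->Fit(g,"R");                                     // Gaussian fit within the range
  double pass = g->Integral(-0.08,0.08);             // |vz|<0.08 m, as in the sample plot cut
  double all  = g->Integral(-0.15,0.15);
  printf("estimated cut efficiency = %.3f\n", pass/all);
}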
Matrix tuning
Vertex z
replay/analysis/zcalib
Raster x
replay/analysis/rastcalib
LHRS angle
$ cd replay/analysis/angcalib
$ make
$ emacs tunepar.dat:
e.g.
"tunepar.dat" (# of iteration + angle flag (1=x', others=y')):
5 1 (five iterations for x')
5 22 (five iterations for y')
$ ./angcalib.py
RHRS angle
$ cd replay/analysis/angcalibR
$ make
$ emacs tunepar.dat:
e.g.
"tunepar.dat" (# of iteration + angle flag (1=x', others=y')):
5 1 (five iterations for x')
5 22 (five iterations for y')
$ ./angcalibR.py
Momenta for RHRS and LHRS
$ cd replay/analysis/mtune
$ make
$ ./mtune (# of iterations)
If you specify 0 for the number of iterations,
the Λ and Σ histograms will be shown without tuning.
Mixed event analysis
Sample code: PDF
$ cd analysis/mm/T2/
$ root mixed_event.cc
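The actual procedure is in mixed_event.cc (see the PDF above). Purely as an illustration of the event-mixing idea, a minimal hypothetical macro (input file, tree name, branch choice, pool size, and the placeholder reconstruct() function are all assumptions) could look like:
// mixed_event_sketch.cc -- hypothetical illustration, NOT the actual mixed_event.cc
#include <vector>
// Placeholder for the real reconstruction (which also needs angles, beam energy, etc.);
// it is here only so that the mixing loop below runs.
double reconstruct(double pK, double pe){ return pK + pe; }
void mixed_event_sketch(){
  TFile *f = TFile::Open("h2.root");           // assumed input file
  TTree *t = (TTree*)f->Get("tree");           // assumed tree name
  double pR[100], pL[100];                     // RHRS (K+) and LHRS (e') track momenta
  t->SetBranchAddress("R.tr.p", pR);
  t->SetBranchAddress("L.tr.p", pL);
  TH1D *hmix = new TH1D("hmix","mixed-event spectrum",200,0.,5.);
  std::vector<double> pool;                    // e'-arm values kept from earlier events
  for (Long64_t i = 0; i < t->GetEntries(); ++i){
    t->GetEntry(i);
    // pair the K+ arm of this event with e' values from other events:
    for (double pe : pool) hmix->Fill(reconstruct(pR[0], pe));
    pool.push_back(pL[0]);
    if (pool.size() > 10) pool.erase(pool.begin());  // keep a small mixing pool
  }
  hmix->Draw();                                // shape of the uncorrelated background
}
Pairs built from different events carry no physical correlation, so the resulting spectrum models the shape of the accidental background (to be normalized before subtraction).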
Beam charge on target
Charge calculation for each run
$ cd replay/Rootfiles/
$ make
→ An executable "charge" will be generated from charge.cc.
$ emacs -nw charge.py
← Specify the target (e.g. dummy, T2, h2, h22, He3) in this script.
$ ./charge.py
→ You will get the beam charge on target in a data file (e.g. charge_He3.dat).
Integration to obtain the beam charge for a particular target
Once you have generated a charge data file in the above process (e.g. charge_He3.dat),
you can get the integrated charge on the target as follows:
$ emacs -nw charge_int.cc
← Specify the data file (e.g. charge_He3.dat) in this code.
$ root charge_int.cc
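As an illustration only, a minimal hypothetical version of the integration (the two-column run#/charge layout of charge_He3.dat is an assumption; check the file written by charge.py for the actual format) could look like:
// charge_int_sketch.cc -- hypothetical illustration, NOT the actual charge_int.cc
#include <fstream>
#include <cstdio>
void charge_int_sketch(){
  std::ifstream fin("charge_He3.dat");    // per-run charge file from charge.py
  int run; double q, total = 0.;
  while (fin >> run >> q) total += q;     // assumed layout: run#  charge
  printf("integrated charge on target = %f (units as written by charge.cc)\n", total);
}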
Momentum Loss Correction
The momentum loss correction functions differ between the nnΛ and 27ΛMg analyses:
- 3H(e,e'K+)nnΛ → Suzuki's study (2019)
- 27Al(e,e'K+)27ΛMg → Suzuki's study (2020)
(Last updated: Apr 30, 2020)
Toshiyuki Gogami, D. Sc.
Graduate School of Science, Kyoto University
✉ gogami.toshiyuki.4a_at_kyoto-u.ac.jp