This kind of loop uses a lot of resources at once: one core and a lot of memory per job/subject. It may be fairer to bundle subjects so that each job runs several subjects serially. Right now I only have a workaround where I hardcode the number of subjects per job and put all subjects for one job on consecutive rows of subs.txt (see the sketch below). I also found that the first PBS command is not necessary.
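A minimal sketch of what that bundling could look like, assuming the subjects for one job sit on consecutive lines of subs.txt; START, SUBS_PER_JOB, and run_subject.sh are placeholder names, not part of the existing scripts:

#PBS -l nodes=1:ppn=1,walltime=24:00:00
# submit with e.g.: qsub -v START=1,SUBS_PER_JOB=4 run_bundle.pbs
cd $PBS_O_WORKDIR
# pull SUBS_PER_JOB consecutive subject IDs out of subs.txt, starting at line START
for SUB in $(sed -n "${START},$((START + SUBS_PER_JOB - 1))p" subs.txt); do
    ./run_subject.sh "$SUB"   # process each subject serially within the same job
done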
According to RC, 4GB per core should not be exceeded or you risk crashing a node. Setting SPM.stats.maxmem = 2^33 allows SPM to use up to 8GB of RAM, and the job may actually use more than 10GB on Blanca. In that case you want to request 3-4 cores for your job so it does not eat into the memory allocated to other people's jobs.
#PBS -l nodes=1:ppn=1,walltime=24:00:00 # request 1 core on 1 node and set the walltime, here 24hrs
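To follow the 3-4 core advice above, the resource request could look something like the line below (a sketch; SPM itself still runs on one core, the extra cores are only requested to reserve memory headroom):

#PBS -l nodes=1:ppn=4,walltime=24:00:00 # ~4 cores x 4GB/core = ~16GB reserved, headroom for a >10GB SPM job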