Handbook for PhD Students
This PhD Handbook serves a dual purpose: it describes the research methodology of our group and gives general advice to students, and it sets out the standards that all students in the group are expected to strive for and the processes they are expected to follow.
Research methodology
PhD research means finding an important open problem and making significant progress towards solving it. Using the right methodology is key to this process, and learning that methodology is one of the central outcomes of your PhD journey.
New students, having read many recent papers with exciting results, often want to start working on new methods right away to achieve similarly exciting results. There is typically a strong focus on method design and testing. What is often lost in the process is due consideration of the underlying problem that the method should address. Such work typically ends up with a vaguely defined problem and a method that lacks clear justification; it is not clear why the method is useful or what problem it actually solves. I call this approach "methods-driven tinkering".
In contrast, problem-driven research starts with identifying the actual problem before attempting to find new methods. This involves answering the following questions:
- Problem: What specific technical problem does your research address?
- Motivation: Why is this problem important? (What can a solution to the problem enable us to do?)
- State of research: Why are current methods unable to solve the problem? (technical limitations, assumptions, etc.)
- Contribution: How does your proposed method address the problem?
It is important to put significant thought into these questions, including writing your answers down formally, as this will frame your search for a method as well as how you evaluate it.
You can further maximise the impact of your work by exploring new problems which are significantly different from prior work. For example, if there is a widely-used but limiting assumption in the literature, your research could be the first to attempt to eliminate or relax this assumption. Exploring new problems can maximise your impact because others may follow your direction, in which case they are likely to cite your work. In contrast, it is relatively less exciting to work on iterative improvements, i.e. "doing more of the same", which may be easily overlooked (or ignored) by others in the field.
Systematic evaluation is a central element of a research methodology. Much of the research in our group is empirical, which means that claims and hypotheses are tested in practice.
An important question to consider is what evaluation tasks to use. The choice of evaluation tasks should be guided by your open problem, in that a solution to the problem is required to successfully complete the tasks. I call such tasks challenge tasks because they allow you to say "this task cannot be solved (well) with current methods due to the open problem". Using a non-contrived, ambitious challenge task gives your research a stronger justification and can force you to think outside the box to find novel ideas.
When designing experiments, try to have specific questions in mind that the experiments should answer. For example:
- How does your method compare to other methods? (e.g. rewards, accuracy, sample complexity, compute time)
- How does your method scale in certain dimensions? (e.g. number of agents, actions, states)
- What is the contribution/effect of different components in your method? (ablation study)
- How does your method perform if certain assumptions are violated?
If you observe a specific result and have a hypothesis to explain it, you may want to design additional experiments to specifically test your hypothesis.
The use of baselines is crucial to set the bar for what constitutes good and bad performance. There are generally three kinds of baselines:
- Best-case/worst-case: Best/worst-case (or upper/lower) baselines are useful to show the margin of possible improvement. Best/worst can mean different things depending on your work. For example, a best-case may be an exact method with "unlimited" compute time, or with access to privileged information that would normally not be available. A worst-case may be a method that is limited or basic in some specific sense.
- Alterations/ablations: If your method consists of several interacting components, it is important to show the effect of each component. For example, these may be elements of your network architecture and input, loss functions, or other sub-functions in your method. Ablation studies take your full method and remove or alter certain elements to show the performance under these ablated/altered versions.
- State-of-the-art: If suitable, it can be a good idea to include some recent state-of-the-art algorithms for comparison. Ideally, each algorithm should use hyper-parameters that were optimised for that algorithm specifically (rather than using the same hyper-parameters for all algorithms), to maximise the performance of each algorithm (see the tuning sketch below).
Importantly, try to think of sensible baselines early in your research rather than as an afterthought.
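To tune hyper-parameters per algorithm for the state-of-the-art comparison, a simple independent search over each algorithm's own grid is often enough. Below is a minimal, hypothetical sketch in Python; `train_and_evaluate`, the algorithm names, and the parameter grids are placeholders for your own training code and search spaces.

```python
import itertools
import numpy as np

def train_and_evaluate(algo, params, seed):
    # Placeholder: train `algo` with `params` and return its mean evaluation
    # return. Replace this stub with your actual training/evaluation code.
    rng = np.random.default_rng(seed)
    return rng.normal()

def tune(algo, param_grid, seeds=(0, 1, 2)):
    # Exhaustive grid search over this algorithm's own hyper-parameter grid,
    # averaging scores over several seeds.
    best_params, best_score = None, -np.inf
    for values in itertools.product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), values))
        score = np.mean([train_and_evaluate(algo, params, s) for s in seeds])
        if score > best_score:
            best_params, best_score = params, score
    return best_params

# Each algorithm is tuned over its own grid, rather than sharing one setting.
param_grids = {
    "our_method": {"lr": [1e-4, 3e-4], "batch_size": [32, 64]},
    "baseline":   {"lr": [1e-3, 3e-4], "entropy_coef": [0.0, 0.01]},
}
tuned = {algo: tune(algo, grid) for algo, grid in param_grids.items()}
```

For expensive algorithms, random search over the same grids (sampling a fixed budget of configurations per algorithm) is a common and equally fair alternative.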
Once you have results from your experiments, you should use the right tools to understand the data. Use good visualisation, including measures of spread (e.g. standard deviation or standard error). Statistical hypothesis tests should be used to establish whether observed differences are significant or due to variance. It is crucial to understand the meaning of statistical significance and how to apply such tests correctly. This guide on experimental design in RL research and this guide on hypothesis testing are useful starting points.
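For example, a minimal sketch in Python (assuming numpy, scipy, and matplotlib; the per-seed returns below are made-up placeholder numbers) for comparing two methods might look like this:

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Hypothetical evaluation returns per seed for two methods (replace with your data).
method_a = np.array([212.0, 198.5, 205.3, 220.1, 201.7, 215.4, 209.9, 203.2])
method_b = np.array([188.4, 195.1, 190.7, 201.3, 186.9, 199.0, 192.5, 194.8])

for name, x in [("A", method_a), ("B", method_b)]:
    mean = x.mean()
    std_err = x.std(ddof=1) / np.sqrt(len(x))  # standard error of the mean
    print(f"Method {name}: mean = {mean:.1f} +/- {std_err:.1f} (std. error)")

# Welch's t-test: does not assume equal variances between the two methods.
t_stat, p_value = stats.ttest_ind(method_a, method_b, equal_var=False)
print(f"Welch's t-test: t = {t_stat:.2f}, p = {p_value:.3f}")

# Bar plot with error bars showing mean +/- standard error.
means = [method_a.mean(), method_b.mean()]
errs = [x.std(ddof=1) / np.sqrt(len(x)) for x in (method_a, method_b)]
plt.bar(["Method A", "Method B"], means, yerr=errs, capsize=5)
plt.ylabel("Average return")
plt.savefig("comparison.png")
```

Report the number of seeds and the type of error bar (standard deviation, standard error, or confidence interval) whenever you show such results.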
My advice for finding open problems and challenge tasks:
Read: Start by reading papers in your area. In the process, try to identify limitations in existing algorithms, such as restrictive assumptions in the models and algorithms used. Assumptions in evaluation settings may also hint at limitations of the algorithms. Try to identify common limitations across algorithms to maximise the relevance and impact of your work. Survey papers often contain sections on open problems which can give useful inspiration (e.g. [1], [2], [3]). Once a technical problem has been identified, try to formulate challenge tasks which require a solution to the problem.
Play: Implement a collection of state-of-the-art and/or baseline algorithms in the field and test them on some recent benchmark tasks. By using these algorithms to solve difficult tasks, you will gain a deeper understanding of their inner workings and their limitations, some of which may not be apparent from their original papers. Based on this experience, try to identify important open problems and new challenge tasks, for example by modifying the tested benchmark tasks to highlight the open problem.
Talk: Seek out and use opportunities to talk about your research and the research of others, whether in informal chats or formal presentations (e.g. group meetings, institute seminars, conferences/workshops, etc). Talking about your work requires you to express your ideas in a clear and succinct way, and is an important channel to get feedback and suggestions from others. It often happens that PhD students bury themselves in their work, which carries a risk of missing out on useful ideas; talking with other people is one way to avoid that.
Integrity: Research integrity means several things. Always make it clear which parts are your own and which parts were done by other researchers, by citing their work. When comparing your algorithm with others, aim for a fair comparison (for example, put equal effort into finding good hyper-parameters for each algorithm). Don't cherry-pick your results; if you persistently get some bad results, it may be that your work is not yet ready, or you may have to narrow the scope to a particular subclass of problems and make your assumptions clear.
See also the University's page on research integrity and this online course for PGR students.
Reproducibility: Scientists often point out limited reproducibility in their research areas (e.g. [1], [2], [3], [4], [5]). There is even an ML Reproducibility Challenge. We are aiming to produce top-quality research, and part of that is complete reproducibility of our work and results. This means that your papers must include all details required to reproduce your results. If you don't have the space, put the details into an appendix and link to it from the main paper. Once your paper is published, upload your documented code (algorithms, environments, scripts to run experiments, etc.) and the results data from your experiments, as detailed in the steps here. Besides ensuring reproducibility, making your code available has additional benefits: (1) other researchers will more readily use your algorithms if the code is already there, thus increasing the impact of your work; (2) it protects you, in that other researchers will use a correct implementation of your algorithm rather than a bad/buggy implementation of their own.
Academic writing
A paper with great science can be difficult to get published if the writing is not good. This includes good text structure and flow, sound arguments, correct use of grammar, no typos or informal writing, etc. Practice is key to achieving a good standard. The IAD offers courses for PhD students, including courses about academic writing.
- Online course: Writing in the Sciences
Structure and narrative: A good guide is to follow the problem-driven research questions. The introduction section can give summary answers to all four questions, while the related work section can give more details on why current methods fall short of solving the problem and why your method is different. The technical sections can then give a precise formal description of the problem and proposed method.
It is important to carefully think about the narrative of your paper - that is, how you will lay out and support your claims and arguments. There are many ways in which this can be done. Two general types of narrative are what I often call 'vertical' and 'horizontal' narratives. A vertical narrative proposes a novel algorithm and focuses on establishing new state-of-the-art performance achieved by the algorithm in some way. I call this narrative vertical because I imagine one algorithm and many results and analyses for that algorithm stacked below it. In contrast, a horizontal narrative may propose several methods to address a new problem and focuses on comparisons between these methods. I call this narrative horizontal because the aim is to propose a broader spectrum of possible solution methods for the problem and understand their differences. Examples of horizontal narratives include this and this paper. In practice, a horizontal narrative can sometimes be easier to get past reviewers because the focus is more on trade-off analyses between the proposed algorithms than on establishing state-of-the-art performance against prior work. However, both narrative types still rely on a strong methodology and useful insights.
There are some general principles which should be followed:
- To maintain good reading flow, each sentence should logically build up to the next sentence.
- A useful rule of thumb is that a reader should be able to summarise the main point of every paragraph in one sentence.
- Keep it as simple as possible, and as complex as necessary. For every sentence you write, ask yourself: do I need this sentence? Read: Simple rules for concise scientific writing
- Use watertight logic. Anticipate a reviewer who will probe every statement you make.
Pre-writing form: To help you prepare for your writing, about two months before the paper deadline, fill out and send me your pre-writing form. Then, send me your first complete draft of the paper four weeks before the deadline to leave enough time for feedback and improvements.
- Author list: list authors by contribution; supervisor usually goes last
- Abstract: the abstract is a 1-paragraph condensed summary of the introduction section; e.g. one sentence for each problem-driven research question, and a brief summary of the main results
- Introduction: follow the problem-driven research questions, and don't waste space on a paragraph at the end explaining the structure of the paper; this should be clear from the section titles
- Related work: the related work section comes either after the introduction or before the conclusion
- Preliminaries: be very careful with your problem definition; double-check your maths
- Math mode: use operator commands such as \arg, \max, \min, \log; use \left( \right) for auto-sizing brackets (also works with [] and {}); each equation displayed on its own line should also have an equation number so you and readers can point to it (see the LaTeX sketch after this list)
- References: make sure references are complete and correct, and use a consistent formatting
- Citing: don’t use citations as nouns or subjects (i.e. don’t write “In [5], an RL algorithm is used to...” or “[1] show that...”); instead use active citing (“Albrecht et al. [1] show that...”) or passive citing (“...it was shown [1]”)
- Naming your method or algorithm allows for more concise writing and makes referencing easier.
- Capitalise Section, Figure, Table, Algorithm (e.g. "see Section X", "as shown in Figure X"). In LaTeX, use a tilde to avoid a line break before the \ref command ("Section~\ref{xx}"), or use the \cref command.
- Use consistent terminology, don't switch between different terms to refer to the same thing
- Use Oxford commas: not “We tried a, b and c...”, instead “We tried a, b, and c...”
- Don't use "he/she" pronouns for agents; use "it"
- Only use "we" when referring to the authors; write "the agent/algorithm does X" when referring to agent/algorithm
- When writing “This” or “These”, always follow it with a noun; not “This shows that...”, instead “This experiment shows that...”
- Know when to use "if" vs "whether": "if" is a conditional statement (think if-then-else), "whether" is a choice between two alternatives
- Avoid informal writing: isn't -> is not, can't -> cannot, like -> such as, big -> large/significant
- Avoid imprecise descriptions, e.g. "performs better/worse than..."; be precise, e.g. "achieves higher average returns than..."
- Use British or American spelling, but not both; my preference is British
- The microtype LaTeX package makes your papers look better.
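The minimal LaTeX sketch below illustrates several of the points above (operator commands, auto-sizing brackets, numbered equations, non-breaking references, \cref, active/passive citing, and microtype). The section label, equation, citation keys, and the use of natbib/cleveref are hypothetical placeholders to adapt to your own paper.

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage[numbers]{natbib} % \citet{key} / \citep{key} for active / passive citing
\usepackage{microtype}       % improves typography
\usepackage{cleveref}        % \cref{label} prints "eq. (1)", "section 1", etc.

\begin{document}

\section{Method}\label{sec:method}

% Displayed equations get a number via the equation environment;
% \arg and \max are upright operators, \left[ \right] auto-sizes the brackets.
\begin{equation}
  \pi^*(s) = \arg\max_{a} \left[ r(s,a) + \gamma \sum_{s'} P(s' \mid s,a)\, V^*(s') \right]
  \label{eq:greedy}
\end{equation}

% The tilde prevents a line break before the number; \cref adds the name for you.
Equation~\eqref{eq:greedy} in Section~\ref{sec:method} can also be
referenced as \cref{eq:greedy}.

% Active citing:  \citet{albrecht2018} show that ...
% Passive citing: ... it was shown~\citep{albrecht2018}

\end{document}
```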
Most conferences and journals give you the option to respond to the reviews of your paper; this is called a "rebuttal". Your goal during the rebuttal phase is to convince the reviewers to increase their scores for your paper. So, for everything you write in the rebuttal, ask yourself: "Will this help to change their mind?"
The rebuttal should point out important errors and misrepresentations in reviews, and respond to specific questions raised by reviewers. Assume reviewers have very limited time and attention spans, so keep the rebuttal succinct and to the point. Above all, use a polite and professional writing style, even if the reviews are unfair or badly done. You should also know who will read your rebuttal: sometimes the rebuttal is addressed directly to reviewers who will consider your comments in a discussion, and sometimes the rebuttal is only visible to the Area Chair (and not the reviewers) who will consider your comments alongside the reviews.
Additional advice:
- Use explicit references to reviewers and comments, to make it easy to connect your responses to corresponding reviewer comments. For example, if a reviewer used numbered comments, then use the same numbers to refer to individual comments.
- When possible, link your comments back to your paper to show that the paper already addresses the point ("As detailed in Section X", "As shown in Figure Y", etc).
- If the space for rebuttals is very limited, focus on the most important/critical reviewer comments. Don't try to squeeze in everything.
Revising papers: Some conferences (and most journals) also give you the option to revise your paper in response to the reviews. Besides clarifications in text, this may include adding more experimental results (such as additional baselines and environments). A paper revision can be the most effective way to sway reviewers: if they can see how you improved the paper based on their suggestions and critique, it can allay their concerns, and there is also the psychological aspect that they will feel taken seriously. To make it easier to find the revised parts in the paper, consider highlighting the changes in blue (the highlighting can be removed later in the final version if the paper is accepted).
See also: Surviving the review process, How we write rebuttals
Follow the steps below after your paper has been submitted or accepted:
- Feedback: Incorporate any last feedback you received from your reviews.
- Code: Upload your code to our code repo.
- Make sure the code is clean and contains a basic documentation in the code files.
- Include a Readme file in your repo which points to your paper and provides basic instructions to users.
- Include the URL to your code in your paper (usually footnote on first page).
- Data: Upload your experiments data to our data repo.
- See instructions for uploading data.
- Include a Readme file which explains the data format.
- Optionally include code files to help read or visualise the data.
- Include the URL to your data either in your paper or in the associated code repo.
- arXiv: Once all authors are happy with the final paper, upload your paper to arXiv.
- Use the 'arXiv.org perpetual, non-exclusive license' license.
- Note that arXiv requires uploading your source files, and these can be downloaded publicly by arXiv users. Make sure to remove any comments from your tex files. You can use this tool to clean tex files.
- If the paper was accepted at a conference/journal, enter the publication details in the "Comments" field (example).
- Use both primary and secondary Subject classifiers on arXiv.
- Important: Make sure to check each page in your paper PDF compiled by arXiv - there can be differences from your original paper which you may have to fix.
- Publications page: Prepare the pub snippet for inclusion in our publications page. See here for the basic format and some examples. Send your snippet to Stefano.
- Overleaf: If you used Overleaf to write your paper, save a copy of your complete Overleaf project on your local machine.
Attending conferences: First, think about what you want to get out of attending the conference: meeting relevant people, learning about their work, telling them about your work, discussing interesting new ideas, and starting conversations which could lead to collaborations. To achieve these goals, it is best to plan ahead. Make a list of people you want to meet, read up on their work so you can relate to it when talking to them, and potentially contact them ahead of time to make yourself known and agree on a time to meet. If whoever you talk to remembers one key point about your work, then that is a good outcome. In contrast, the least effective approach is to be shy/passive and hope for others to come to you. This is not the time to be shy; attending a conference costs a lot of money, time, and effort, so you want to maximise the returns. Keep this in mind when planning your conference trip.
Presentation: Create your presentation slides using the group's slide templates (these can be used with most slide software, including Google Slides): Slide template 1, Slide template 2
- Rule of thumb: plan for 1 minute per content slide
- Speak clearly and not too quickly
- Make good use of visuals (images, videos) to explain ideas and results
- In results plots, it's a good idea to first show and explain the axes, then show the data in the next step
- Do practice talks in group meetings to get feedback
Poster: We don't have a poster template but here are some examples (example 1, example 2). These were generated with Latex using the 'baposter' document class. You can also use Powerpoint/Keynote or similar software to make posters.
- Design your poster so you can deliver your complete pitch in 1-2 minutes
- Practice your pitch while designing your poster - imagine standing next to it and pointing with your finger
- Don't print text/images too small; the poster should be readable from a distance of 2 metres
- Include enough detail so that the poster also works alone - people may read your poster when you're not around
- Show your poster around in the group to get feedback
Science article with more advice: How to prepare a scientific poster
Below is a list of "vetted" target conferences for our research, ordered by usual submission deadline (please double-check deadlines as these can change each year).
For AI/ML research:
- International Joint Conference on Artificial Intelligence (IJCAI) -- late Jan
- European Conference on Multi-Agent Systems (EUMAS) -- Feb
- International Conference on Machine Learning (ICML) -- early Feb
- Conference on Reinforcement Learning and Decision Making (RLDM) -- late Feb
- Conference on Uncertainty in Artificial Intelligence (UAI) -- late Feb
- Conference on Neural Information Processing Systems (NeurIPS) -- May
- AAAI Conference on Artificial Intelligence (AAAI) -- Sep
- AAAI Conference on Innovative Applications of Artificial Intelligence (IAAI) -- Sep
- AAAI Spring/Fall Symposia -- website
- International Conference on Learning Representations (ICLR) -- Sep
- International Conference on Artificial Intelligence and Statistics (AISTATS) -- Oct
- International Conference on Autonomous Agents and Multiagent Systems (AAMAS) -- Nov
- European Conference on Artificial Intelligence (ECAI) -- Nov
- International Conference on Automated Planning and Scheduling (ICAPS) -- Dec
For AV/robotics research:
- Robotics: Science and Systems (RSS) -- end of Jan
- IEEE Intelligent Vehicles Symposium (IEEE-IV) -- end of Jan
- International Conference on Intelligent Robots and Systems (IROS) -- end of Feb
- Intelligent Transportation Systems Conference (ITSC) -- Feb/March
- Conference on Robot Learning (CoRL) -- July
- International Conference on Robotics and Automation (ICRA) -- mid-Sep
Workshops: Most conferences also run various workshops which can change from year to year. Workshops are much smaller in size and focus on specific sub-topics. Workshops typically use a lighter review process and so are easier to get into. They provide a good venue for early-career researchers to present and discuss their work, meet people, and get feedback. A workshop paper can be a good stepping stone toward a paper at the main conference. AAAI Symposia are somewhere between a workshop and a conference.
Journals: There are several excellent journals for our work, including AIJ, JAIR, MLJ, JMLR, JAAMAS, and T-RO. Journals are meant for more "mature" work and typically don't have a page limit, to give more space to authors. A journal article often combines and extends one or more published conference papers by the same authors, but it does not have to. In AI/ML, journal publications are generally considered above conference publications.
Day-to-day work
Read: Code of Practice for Supervisors and Research Students
The School defines several milestones for PhD students. All students in the group should be aware of their milestone dates and plan accordingly. The timelines aren't set in stone, so there is flexibility if needed. The most important milestone in each year is the end-of-year report and review meeting, based on which it will be decided whether you will progress to the next year. Some guidance on results for annual PhD reviews can be found here.
The School and University provide a number of training courses for PhD students.
See the IGS pages for more general information.
You should adopt a particular approach to our individual meetings: use them as a resource, and come prepared to maximise their usefulness for you. Each meeting should follow this structure:
1) Recap: what was the state of your work at the last meeting, and what did we agree you would do for the next meeting?
2) Discuss your progress, problems, ideas
3) Propose what to do next, until the next meeting
Your supervisors want to give you helpful feedback. However, keep in mind that supervisors have many such meetings, and it can be difficult for them to keep all details in mind between meetings. So, to facilitate useful feedback, it is helpful to remind supervisors of the state of the work and current targets.
Preparing for the meeting means that you give some thought to 2) and 3) before the meeting. During the meeting, you should note down all work tasks we agreed on, which can be used for 1) at the next meeting.
If you want more detailed technical feedback on an idea, it is best if you can send me a write-up of the idea prior to the meeting. Writing it up is also immensely helpful to clarify your own perspective.
One of my most important functions as your supervisor is to give you feedback on your ideas, methods, results, writing, etc. Consider the following:
- If you want to discuss a technical idea at a meeting with me, it can be very helpful to send me a write-up of your idea prior to our meeting so that I can take the time to prepare. At the very least, plan an organised presentation of your idea on the whiteboard for the meeting, so that I can follow it step by step.
- If I give you written feedback on a document you prepared, please take the feedback seriously. You should assume that when you send me your next draft, I will check whether and how you addressed my comments. Give your reasons if you decided not to address a more major comment.
- Feedback is a two-way conversation. You should actively ask questions rather than just passively receive feedback. If my feedback doesn't address a particular point that is important to you, then you should always feel free to ask for more feedback on specific points.
Important: If you send me an updated draft, please also send me the diff from the last draft. Use latexdiff.
It is a good idea to actively seek feedback from peers and experienced academics. Have an open attitude towards feedback rather than be defensive. This does not mean that you have to accept unfounded criticism or that you have to change your ideas and methods if you disagree with the feedback. However, think carefully about any feedback you get, and whether you may in fact want to change aspects in your work to address the feedback. This may mean extra work, but the resulting improvements can make the difference between an accepted and rejected paper.
Creating a timeline (Gantt chart) of your activities is a useful tool to keep track of your PhD work. Examples of activities include:
- Coursework (state course acronyms)
- Teaching (state course acronyms)
- Research (this could be further broken down, e.g. reading, implementation, evaluation, etc)
- Paper writing for conference X
- Internship at company Y
- Interruption (state reason)
Keeping track of your activities helps you reflect on how you use your time, and whether you may want to adjust your work plans/habits to make more efficient use of your time. One software option is GanttProject, but many other apps exist. You can use the software to export an image that you can share with me and use in your annual PhD review meetings.
Besides the annual review milestones required by the School, I use a combination of short-term and long-term criteria to assess your progress:
- After each individual meeting, we usually agree on a set of tasks to work on until the following meeting. When we meet again, you tell me about your progress on these tasks, giving me an indication of your short-term progress.
- In the long-term, your PhD research objectives are an important indicator of your progress. How much closer are you now to achieving your research objectives than you were at the start of your PhD?
- Your PhD timeline charts also provide an indication of your progress. They show me the things you've been doing and whether you use your time effectively.
In addition to the above, I also monitor how you develop your soft skills: the quality of your writing and verbal communication, how you handle feedback and criticism, your ability to give research talks, how you work with others in the group, etc.
We have several support roles to help with various aspects of managing our research group. Roles and responsibilities are listed below.
Code & Data admin: (current: Lukas (code), Max (data))
- Manage our group code repo (e.g. access to Github repo, creating new code repos for people)
- Manage our group data repo (e.g. creating new data repos for people)
- Ensure code and data repos uphold our reproducibility & transparency policy (e.g. code files include documentation, code/data repos include Readme files with usage instructions)
Server admin: (current: Trevor, Sabrina)
- Provide general tech support for our servers via the #compute channel on Discord (e.g. server problems, software installation, Slurm config, etc.)
- Help with physical installation of servers in server rooms
- Contact School IT support for issues with "school-managed" servers (e.g. ipab1)
Group meeting admin: (current: )
- Organise our biweekly research group meetings
- Find people to present project updates (two people per meeting, each ~15-20min + 10min discussion)
- Book meeting room (room booker), ideally IF 1.15/1.16 and for 1.5 hours
- Add group meeting in our Teams group calendar (tag the 'Agents group > General' channel when creating the meeting and add room location)
- Announce meeting and speakers in the #general channel on Discord
- Frequency: biweekly (check day/time with me for each meeting), alternating with the RL reading group meetings
Social media admin: (current: Elle)
- Post messages on our Twitter and LinkedIn channels
- E.g. about new papers, blog posts, events we're organising or attending, news about MARL book, other news, etc. - ping me or group on Discord (#social-media channel) to get ideas
- Help to increase social media following
- Before posting, send me a draft of the message to approve
- After posting, send emails to InfComms and ECR to ask them to share the posts (limit this to the more important posts about our own things, like papers and book).
- Frequency: weekly
RL reading group: (current: Samuel, Kale-ab)
- Organise biweekly RL reading group meetings (see here)
- Announce meetings on the RL reading mailing list
- Add meetings on the RL reading calendar
- Manage the paper voting list
- Frequency: biweekly, alternating with research group meetings
Socials admin: (current: Balint, Max)
- Organise group social events via the #social channel on Discord
- Frequency: aim for at least one event per year-quarter
- Organise weekly group lunches
Resources
The group maintains a blog to promote our research and to generate quality content for search engines. Blog posts may cover various topics, including:
- Promoting a recently published paper
- Tutorials about interesting techniques
- Proposing a new challenge and early results
- Code and data releases
- Reports from conference events
Blog posts should be written in an engaging and accessible style, and with a broad audience in mind. Make good use of illustrative examples and visuals (graphics, videos). Writing blog posts is good practice for accessible writing, and provides an outlet for you to promote your work and ideas.
To write your blog post, download the template file and follow the steps in the README file. I recommend using an HTML editor such as Brackets to edit your files. Your web browser displays the file 'blog-template.htm' exactly as it will appear on the group homepage.
Members of the group have access to a number of computing facilities.
- University cluster: ECDF Eddie is a large cluster maintained centrally by the University. We have a group space under '/exports/csce/eddie/inf/groups/agents' which has 200 GB of storage. Use the 'quota' command to check space usage. Please clean up your files when they are no longer needed. Consider using the alternative 'scratch' space if you temporarily need more space (see quick guide).
- School clusters: The School maintains several of its own clusters which you can find here.
- Group servers: The group maintains its own servers to which only group members have access. To get started, see the server readme (if you can't access this page, ask Lukas to be added to code repo). Servers are currently reserved for PhD students and postdocs in the group; MSc/UG students should first seek approval.
- JADE: We have access to the JADE cluster maintained by the Alan Turing Institute.
Best practices:
- Delete unused files: Be mindful of the storage space you use up. Delete files if they are no longer used, or download them to your local machine.
- Save space with type casting: Most programming languages provide operators to cast data into different types. You may be able to reduce your space usage considerably by casting double floats into single floats for storage, provided single floats give you sufficient accuracy. Similarly, consider casting 64-bit integers into 32- or 16-bit integers if their number ranges are sufficient (a minimal sketch follows after this list).
- Job priorities: Most job scheduling systems use priority-based job queues. User priority can depend on different factors, such as how many jobs the user has run recently and how many resources (CPUs, GPUs, RAM, time, etc.) are requested for the job. Plan your resource requirements carefully and request no more than you need, to increase your priority.
- Code profiling: For very time-consuming jobs, it can be a good idea to do some performance profiling of your code and consider whether there are bottlenecks that can be optimised in some suitable way.
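As a minimal sketch of the type-casting point above (assuming Python with numpy, which may not match your own stack), casting arrays to smaller data types before saving can halve or quarter their storage footprint:

```python
import numpy as np

# Hypothetical results array: 1000 episodes x 10000 steps of float64 rewards.
rewards = np.random.default_rng(0).normal(size=(1000, 10_000))
print(rewards.dtype, rewards.nbytes / 1e6, "MB")      # float64, ~80 MB

# Cast to float32 before saving, if that precision suffices for your analysis.
rewards32 = rewards.astype(np.float32)
print(rewards32.dtype, rewards32.nbytes / 1e6, "MB")  # float32, ~40 MB

# Similarly for integer data, e.g. action indices that fit into 16 bits.
actions = np.random.default_rng(1).integers(0, 100, size=(1000, 10_000))
actions16 = actions.astype(np.int16)

# Compressed .npz files save further space on disk.
np.savez_compressed("results.npz", rewards=rewards32, actions=actions16)
```

For the profiling point, Python's built-in cProfile module is one common starting point; most languages have similar profilers.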
Those of you who are on CDT-funded PhD programmes will have to take a few courses during your first year. Normal PhD programmes don't require courses, but students are still allowed to attend courses if they wish (although these don't count towards the degree). I don't expect you to take courses if they are not required for your degree, though it may be a good idea if you find that you need to fill some gaps in your knowledge relevant to your research. You can attend courses without doing the assessed courseworks and exams.
The School and University also provide a number of training courses for PhD students.