- Hello everybody. Welcome to this iBiology and National Research Mentoring Network Live Q and A. Today we're going to be discussing asking a scientific question, or how to develop a research question. We know that this is an essential first step in doing research. A well-formulated research question helps focus what you're going to do, and it really guides your experimental approach. Today we have a wonderful panel, and they are going to be offering practical strategies and advice on how you go about developing a research question that's specific, meaningful, and feasible. They will discuss some fundamental characteristics of a good scientific question, and important points that you need to consider as you contemplate what you want to study, or if you've already started a project, how you want to move it forward. As I mentioned, this live Q and A is a collaboration between iBiology and the National Research Mentoring Network, or NRMN, but this Live Q and A is also part of iBiology's Planning Your Scientific Journey course. In case you don't know about the course, we just launched a Beta test for it. Planning Your Scientific Journey is a free online course. It will officially launch in October of 2017, but the topic and the panelists participating in this Live Q and A are part of the course, and we're really happy to have them on board. I'm going to go ahead and introduce them. We have Angela DePace. She is an Associate Professor of Systems Biology at Harvard Medical School, and her group is broadly interested in the evolution of transcription networks in animals.
We also have Ryan Hernandez, who is an Associate Professor in the Department of Bioengineering and Therapeutic Sciences at the University of California, San Francisco, where he studies patterns of genetic variation in modern day populations to gain insights into their evolutionary history, and last but not least, we have Indira Raman, who is a Professor in the Department of Neurobiology at Northwestern University, and her work is in the areas of ion channel biophysics, synaptic transmission, and cerebellar physiology. Thank you so much, everybody, for joining us. I want to get us started with a very basic question just to help us set up the discussion, and that is, what are some qualities of a good scientific question? I'll start with you, Angela. - Some qualities of a good scientific question. You can think about it both from a scientific standpoint and from a personal standpoint. From a scientific standpoint, you want something that is related to a challenge in the field, or a puzzle, or a problem, that's of a broad enough scope that there are multiple people who are going to be interested in it, but at the same time is also a piece of that puzzle that's tractable enough that you can make progress on it during the time that you have for your research project. If you are an undergraduate summer student, the scope of that is going to be very different than if you are a graduate student at the beginning of your graduate career, and it's going to be different again if you're a post doc who's planning on two to five years in the lab. It's really important to think not only about the research area that you're interested in, but also about what piece of that puzzle is appropriate for you given your goals for your research experience. I think a good match between those things is the first part of what makes a good scientific question.
The second part, I think, that makes a good research question is whether or not it fits with your temperament and way of doing science. Everybody likes different kinds of science. For me, I love visual methods. I love microscopy, because I love looking at tiny tiny things, and it makes me really happy to look at them, and so I think it's an important piece to see how you're motivated, both by the larger intellectual question, but also just by the way that you're going to approach the problem. Take into account the overall puzzle and its interest to the field, your time constraints, what's tractable, what kind of model you're going to be looking at, and then also what sorts of things you're going to find sustaining, so that you can run the marathon of doing a project. Those are the broadest pieces in my mind. - Ryan. - Great. Yeah, I think that's an excellent characterization. You're balancing this broad question. You might be interested in, for example, what causes Parkinson's disease? That's a great question, but it's not a good scientific one. It has to be much more specific than that. You can ask about a particular risk factor, say the G2019S mutation in LRRK2, a particular protein where that mutation has high penetrance for Parkinson's disease. You could ask, how does that penetrance vary across populations? It's a very specific question, something that's actually tractable, that you can collect data on and address. Importantly, I think that a good question is one that's strategically designed such that either answer is going to be interesting and a valid contribution to the field. Being able to find questions that are interesting enough and impactful enough that the answer doesn't actually matter, because it's going to be interesting either way, is, I think, really the art in asking a scientific question, and that's not an easy thing to do by any means.
It takes a lot of practice, and in fact, what I've found so far is that that is the most challenging part of pursuing an advanced degree. One of the aspects of getting a PhD is learning how to do things, but I think the hardest part is really learning how to ask questions. It's a skill that takes a long time to develop. I don't think anyone is necessarily an expert in finding answerable questions, or questions that are always testable, but I think that with practice, one gets good at identifying classes of questions that are interesting, and taking these broadly interesting topics, such as what causes, say, Parkinson's disease, and refining them down to something that is tractable, as was just mentioned, and yet still interesting. That's the area that I think is most pressing. - Great, Indira. - I was searching for my mute/unmute button there. I agree very much with what Angela and Ryan just said, and I'll just put a little bit of my own phrasing on the identical ideas. When I'm asked what are some qualities of a good scientific question, my immediate answer is that it's fractal. It addresses a big multifaceted issue that you want to know about, that doesn't have a yes/no answer, which is what Ryan was saying. Not, are inhibitory neurons involved in disease, yes or no, but something about how that's going to relate. The big question has to break down into smaller questions, which break down into smaller questions, each of which encompasses a world of its own. This ties into what Ryan was saying, that there are many different elements composing the big question, and into what Angela was saying about them having to be tractable. As you break down these questions, any level of it gives you some kind of an answer that has the potential to be interesting, and again, not being a yes/no answer sort of ties in with the idea that it isn't that you have to get one particular answer. If the answer is yes, then oh, it's exciting.
If the answer is no, it's not interesting. Instead, whatever comes out of it ends up being informative, and helps you either break down into smaller questions, each of which may be important, or build up to start answering the larger questions that you said you were interested in in the first place, and I'll stop there. - Wonderful. Thank you everybody. There are a few things, collecting your responses, that seem to be really important for people to consider. First of all, you need to consider your personal versus scientific points of view. What are you interested in? It's really important to think about what stage you're in. Are you an undergrad? Are you a grad student? Are you an advanced graduate student? Are you a post doc? What stage of your training are you in? That's going to help you determine how much you can take on in terms of the question that you're going to ask. Of course you have to think about your area of interest, your skills, your temperament, your values. If I may add, do you like to sit in a dark room with a microscope all day? Is that something that you enjoy? Is that something that you don't? Do you want to be doing modeling? What is it that you're interested in? What do you enjoy doing? Something that I thought was really interesting, and that we're going to come back to, is that it takes a lot of practice. I think learning how to ask interesting and testable scientific questions is at the heart of research training, of graduate training. How you get that practice is something that we'll come back to. Then I think one strategy that you've all offered is that you can think about a research area that you're interested in, or where there are gaps in knowledge, and then start to refine: what is the question that you're going to ask? Again, thinking about how you make it testable, tractable, and interesting. With that said, then how do you come up with a good scientific question?
What are some general approaches that you can take? Ryan, you've already mentioned that it takes practice. What are some general approaches that you suggest students or post docs take when they're trying to come up with a research question that they want to develop? And if you could give some examples from your own experience, or from your labs, your research groups, that would be fantastic. I'm going to start with Ryan, since he talked about practice. - Yeah, I think the biggest element when you're learning how to ask questions is discussion time. Having face to face conversations with people who can help you take this vague notion of an area of interest and come up with a specific question that can be addressed. I think it's really through discussions that you can start to narrow down what is the interesting and important aspect of the question, and how best to make headway on that question. In terms of thinking of examples, the process is as unique as people. Everyone comes with a different background and will come with a different vantage point on how they do science, and how they learn to become an independent scientist, and so it's really hard to come up with generalized approaches, but I can give one example of a student who joined my lab a number of years ago. He was broadly interested in studying evolution in real time. We run an evolutionary genetics lab. We do a lot of statistical modeling, computational modeling of genetic data, and studying evolution in real time is an interesting concept. If you're in a PhD program you have a narrow time frame that you can do work in, and if you actually want to study something that happens over the course of a PhD, how do you do that? One thing that he latched on to was this idea of using the immune system, the adaptive immune system, as a complex evolving population.
From this broad notion of studying evolution in real time, we moved to finding a case where evolution does happen really quickly over the course of one's lifetime, and in particular, refining it even further, to how does the immune system respond to a vaccine, for example. Then you can start to study the dynamics over just a period of days. For example, with an influenza vaccine, the antibody response comes within about a week or so. You can start to study dynamics over very very short timescales, in a way that could be quite exciting. This was a fun process, by which we went from this very broad topic of studying evolution in real time to all of a sudden narrowing it down to how does the immune system respond to a vaccine, and how do we collect data in order to address that question? I think that was definitely one of the more fun aspects of coming up with a scientific question that I've had. - Great, thank you. Indira. - How do you form a question? How do you come up with a question? You start with something you really want to know. You ask yourself, what is it that I really want to know? I have a whole file on my computer that's called, What Do I Really Want to Know? When I find I can articulate something, like gee, I want to know that, I type it in there, and I have this file. I'm a neuroscientist. I want to know how the brain transmits information. I want to know, what is the language of neurons? How do patterns of action potentials make the code that signals everything? Those are big issues. Those aren't actually scientific questions. Those are things I really want to know. It's what I wonder about. Then you take a bit of information that catches your fancy and makes you wonder more. In my own case, I learned in graduate school that Purkinje cells in the cerebellum are inhibitory neurons, and I thought that that was pretty remarkable, because I had learned that information is transmitted by action potentials from cells that are excitatory.
I thought, what's the signal in a cell that's inhibitory? What's the signal in a cell that tells everybody else to shut up? That made me start thinking, I need to know, what do Purkinje cells actually do? What is their activity? How do they signal? Then I started making electrical recordings from cells to start to describe something. Things go from there, but I'm going to focus for a second on that word describe, because I think that today we have sort of tumbled into this, I'm going to go ahead and use a big word, intellectually destructive dichotomization of calling science either descriptive, which is pejorative, or mechanistic, which is supposed to be good, and the mechanistic people have captured the flag of the phrase, "hypothesis driven". Sounds fantastic. The people whose work has been dismissed as descriptive have countered with the term discovery driven, which is ingenious, but it's also sort of hilarious. Then the non mechanistic, non descriptive people have fought back by demanding physiological relevance, no matter what you do. A person trying to come up with an experimental question is caught between all of these terms. I say, let's keep these words in mind, but let's just remember that every question begins with observations. If you don't actually describe the phenomenon in question, you can't develop a hypothesis about how it works, and if you don't carry out some investigations to test predictions to let you know you're on the right track, you can never relate your work back to bigger physiologically or biologically relevant questions. In my mind, as I've said, I'm interested in the language of the brain. I'm interested in language in general, and to me this all comes down to grammar. I really think like this. The description is the nouns. You need to know what you're working with and what the players are, and the mechanism is the verbs. How do all these nouns interact? What do they do? What phenomena emerge from their doing and their interacting?
The physiological relevance comes from the meaning that results from the nouns and the verbs together. You can't have language without both. Let's stop it already with the hierarchy of experimental approaches. When you go back to what I was saying before, you think about, what am I really interested in? You ask yourself, what are the players, and how do they interact? From there, I think many of the specific questions start to reveal themselves. - Angela. - I want to offer a couple of procedural tools for how to get through refining a question. The process that we use in my lab actually takes quite some time. I give people, obviously not people who are only in the lab for the summer, because this would take their whole summer, but I give people about three months to really finalize what their project idea is going to be, and it's done in discussion, which lifts up what Ryan was saying and Indira confirmed, which is that it's a really interactive process of refining. In the first month, I want people to read and talk broadly and come to me with three broad areas or puzzles or questions that they might be interested in. These are things like, my lab broadly works on transcription, and some people are like, I'm super interested in the nucleus, and where genes are in the nucleus, and how that works, but I'm also really interested in how multiple pieces of DNA come together to affect gene expression, or something like this. They're pretty broad, and I think that's okay, because you should know what place you're situated in, especially when you're reading about a new field. From those three, from the conversation, I can usually tell which one people are most excited about, and where I think there are opportunities to do stuff. I make a case for which one I think they might want to do, and see if we can come to an agreement about the area.
The next month, their job is to come up with as many different hypotheses and/or experiments as they can that address that general area, and then we meet again, and we talk through this laundry list of stuff, and usually, again through conversation, I can help them refine which ones they're most interested in, and which ones I think are most technically tractable, or are the most strategic in the context of what the lab is doing. Things like that. Then in the last month, their job is to refine that idea into a written proposal that has hypotheses and/or goals, which is my way of talking about discovery driven or hypothesis driven, and specific experiments to address those. It results in a written proposal, whether you're submitting one or not, because it's a good starting place for your idea. There are a couple of other further tools that are really useful in this process. The first one is the thought experiment. When you have an idea about an experiment, or an analysis, or a model that you might want to make, it's really useful, both to talk to other people about it, and also to think all the way through what might happen. Go ahead and do a version of the experiment in your head, make a cartoon of the data that you will get, and decide, what will this data tell me? If it looks like this, I will think this. If it looks like this, I will think that. It's really important to force yourself all the way through to the endpoint of an experiment, because it can help you distinguish these yes/no questions from other kinds of questions, and it can also help you realize if there are demands on the measurements that you're going to make. If this is only interesting if I can distinguish this line from this line, then I'd better think about what the error is that's going to let me distinguish this thing from this thing. I'm a big believer in the world of cartoons and pushing things all the way through.
Push all the way through to something that's almost like a fake paper, where you're like, figure one would look like this, and figure two might look like this. It will totally get destroyed while you do your project, but it's a useful place to start. The second tool that I think is really really helpful when you're in this land of baby ideas that aren't very well formulated, which is a land everybody has to live in, because you don't know everything when you get started, is to black box methods and analyses that you either don't understand, or don't know whether they exist. This is to say something like, if only I could measure X and Y, I would be able to determine this thing that I'm really interested in. I don't know if there's a way to measure X, and I don't know if there's a way to measure Y, but this allows you to shop your idea around to people, and they might be able to tell you, sure, you can measure X doing this, or you can measure Y doing this, or, that's super interesting, nobody knows how to do that, and maybe that's a project in and of itself, figuring out how to measure X or how to measure Y. That kind of black boxing of pieces that you don't know all the way yet can be really helpful in going from the very abstract research area that you're interested in to the much more specific question later, and I can't overemphasize how useful it is as a tool to talk to other people about what you might want to do, because you've started with the big-picture plan: if I could tie these two things together, I would be able to learn this, which is motivating, but I need help on the technical details. You're telling people where to help you dig in. There's been so much good advice. I was taking notes on other people's good advice here too, but those are the strategies that we use in my lab. - That's wonderful. Maybe you can mute your microphone. Thank you, Angela. Here I am. Just to summarize: face to face conversations, discussions, are key in this process.
You need to talk to your advisor, people in your lab, other mentors. The process of coming up with a good scientific question involves a whole lot of thinking and self reflection and reading, but there's also a lot of interacting with people to help you refine your ideas. I think something that both Indira and Angela mentioned that is really really important is, what do you want to know? What are the black boxes, both in the field in general, but then also, when you're homing in on the question that you want to ask, and you're doing the thought experiment, are there techniques that have not been developed? What would you need to actually be able to ask and answer this particular scientific question? What we've just discussed is actually a great segue into a question that we've gotten from our viewers, and it's from, oh, Brad Pitt is watching us, and his question is, after you identify a research question or a scientific question that you want to ask, how do you proceed to identify an experiment? You have a scientific question that you want to ask, and now you're transitioning to actually answering it. What are some approaches to come up with experiments that can help you do that? You can talk about specific examples, but also broad strategies that students and post docs can use to come up with experiments to answer the question that they want to answer. I'll start with you, Indira. - That's a great question. I think that once you have the scientific question you want to answer, and you want to figure out your approaches, it helps, and I'm going to echo a lot of things that Angela said here, to ask, what would it take for me to have an answer? What would it take for me to feel like I knew the answer to that question? Remember that a hypothesis is a statement of how something might work. Not, this plays a role in that. Sodium channels play a role in neuronal signaling. That's not a hypothesis.
A hypothesis is more like, again, back in my own field here, Purkinje cells fire action potentials rapidly by means of specialized sodium channels that recover rapidly from inactivation. It doesn't matter whether you follow that exactly, but it says this phenomenon comes from this behavior, this interaction, this activity, this protein doing this or that. One of the fun ways to talk about the science is: if that's true, then if I did this experiment, I would see this other thing. If it worked that way, then I should be able to see that recovery from inactivation is really fast. If I've correctly answered my question of how Purkinje cells fire, I would be able to see that the sodium channels recover rapidly from inactivation. That's an electrophysiological experiment. Now I know to ask, can I measure this electrophysiologically? This echoes back to Angela's points. I can. I know I can do that experiment. I can say, if it works like that, that Purkinje cells rely on specialized sodium channels, then if I were to disrupt the specialization factor, it should change the firing patterns. That means I need to knock out the specialization factor. That's a genetic experiment, but it means that we need to know, what is the specialization factor? That's a molecular question. Maybe I need a screen to find the associated factors. Can I do that? That's when I go and I ask people, as Angela said, is there a way? Is there some screen? I'm an electrophysiologist. I don't know about all these molecular tools. How do I find out? In other words, by stating what it would take to give you a full answer, you can start to figure out the approach. But I think it's really important to also emphasize that in doing the science, you have this space where you have to be really precise. You need to know exactly what you're going to do today, but you also have to have a space where you're willing to be uncertain. You have to be able to inhabit the world of uncertainty.
Sometimes you do an experiment that's a pilot experiment. This goes just beyond Angela's thought experiment. You're like, I'm just going to try it and see whether I can measure recovery from inactivation, and then analyze your preliminary data. Analyze even the lame first experiment you do. Make the whole plot. Plot the figure, and then you're like, oh, actually I should have measured that at two different voltages, or I should have measured that under two different conditions, or, oh, this actually looks quite promising. It helps you refine the experiments, even analyzing the very beginnings of data. Again, your initial figures are going to get completely overturned. You're probably going to analyze the same data over and over again in different ways, but it helps you proceed. Don't think it all has to be done in your head. Do the stuff in your head like Angela said, and I'm not contradicting her, but it doesn't all have to be seen perfectly in your mind before you actually take action and do the experiment. - Wonderful. Angela, I'll have you go next. - Yeah, this is really hard. The question was, how do you go from a general research area, or thing that you're interested in, to specific experiments? I would argue that the kinds of experiments that you can do, that are available for any particular question, break down into three categories. The first thing you have to understand is, what context are you studying this thing in? You could be studying it in multiple contexts, but the choices are basically, are you in an organism? Are you in cells? Are you in extract? Are you in vitro? Are you in a computer? Where do you live on that spectrum? Then based on that, you need to know, what measurements of the phenomenon that I'm interested in can I make? These are the proxies for the biological process that you want to study. What are the proxies for the biological process that you want to study? These can be things like imaging.
The presence or absence or location or level of some protein or nucleic acid. Is it sequencing data? How do people look at this thing? Then the third thing is, how do I mess with it? What are the perturbations that are available in this system? Can you genetically knock things out? Can you knock things in? Can you upregulate them? Can you downregulate them? Is it amenable to small molecules? Do you deplete stuff? Can you tune it up and down, or can you only get rid of it or keep it? Or can you run voltage through it? You can do all kinds of crazy stuff. But the point is, there's an infinite world of experiments, but they're kind of hierarchically organized, in the sense that the kinds of things that you can do in different systems, different contexts, the kinds of things that you can do to a cell in a dish, are different from the kinds of things that you can do to a whole organism, and are different from the kinds of things that you can do with a computational analysis. Having a really good sense of, just to reiterate, the proxies for the biological process that you're interested in, and the perturbations that you can make to it, is the first piece of understanding what kind of experiment you can put together. It may be that when you start digging into the field, you realize that there's a measurement that is possible that people haven't made yet in this system, and either the system is not well characterized just at its baseline level, in which case there's maybe some phenomenology for you to describe, or it may be that that information is very well established, but there are a bunch of different mechanisms that people are interested in, in which case you need to start poking at the system and seeing what it really does. I think that trying to break things down in that way can be a helpful way of moving from the intellectual to the technical. - Let's see, Ryan.
- I'm a little different, since I'm a computational biologist and I think more like a statistician, so I approach things quite differently. In coming up with a question, my emphasis is always on a notion of hypotheses in more of a statistical sense: a testable hypothesis, consistent with what everybody thinks of as a hypothesis, as Indira defined and described for us. That was great. When we're thinking about doing statistical comparisons, you want to think about, what is the null hypothesis? What is an alternative hypothesis? What is the baseline expectation, and what are the possible alternatives? Sometimes you'll have one alternative. Either this happens or it doesn't. It happens at a certain level, or it doesn't happen at that level at all. Those are the types of questions that can be addressed. I think that the critical aspect here is in trying to write down what these competing alternative hypotheses might be about this particular question of interest. You're interested in one of these questions, which could start off vague and start to get more and more refined. You end up with a particular question that's specific enough and interesting, and now you want to start doing something. You want to do some analysis. You're going to have to come up with some hypotheses, some alternative hypotheses, that can explain the particular phenomenon that you're interested in. The next step for me, and for people in the more computational areas, is trying to identify data. Sometimes you're in an experimental lab, and you can think about what experiments you would do to generate your own data, but in this digital age, there is an abundance of data that's already out there. We have different types of immune data. We have genetic data. We have proteomic data. We have data across different types of chemical libraries. We have countless types of data that are out there and already available to be analyzed.
Many of these datasets are actually amenable to answering questions you might have that the people who generated them didn't even think about. That's really exciting. When I go back to this one question of thinking about how the antibody repertoire responds to an influenza vaccine, this data set was already out there. People had published on this very question, but using very different approaches, and very different lines of questioning, than we were interested in. We were able to obtain this data set from the authors and from the web, and start to dive in and ask our questions of that existing data set. Being able to identify data that already exist, I think, is incredibly helpful, and sometimes can speed up this process of coming up with experiments as well, if there happen to be datasets out there that you can access quickly. The emphasis in my lab is always on trying to find the right types of data, and then characterizing, what are the problems with it? This is always going to be the case: you have to understand the caveats of the data that you're analyzing. I think this is true in general, but particularly true when you're going to use publicly available data. One always has to consider what the caveats are, and how those caveats of a particular data set are going to impact the understanding of the hypotheses that you set out to test. - Great. Coming up with a good scientific question and figuring out what experiments to do is hard. That's my takeaway from all you've said, but again, it's an iterative process. I liked what you said, Angela, that it's hierarchical. You can go from doing a thought experiment, and then, as Indira mentioned, doing pilot experiments. See what the data is, and then you continue, again, to refine the questions, the smaller questions within the bigger question that you're looking to answer, always keeping in mind, what would it take to get a full answer to this?
What is the context in which you're doing this? What is the system? What are the techniques and perturbations that you can use to get a full answer to the question? What are your alternative hypotheses? Then I think something that's really important that you just mentioned, Ryan, is what's out there, and what are the caveats? We could think about that from a more statistical standpoint, but also, what are the caveats of the systems that people are using, or have used, to answer similar questions? What are the techniques, and so on? I want to switch gears. Angela wants to say something. - I just wanted to lift up the idea of the null hypothesis. The null hypothesis is not only important in statistical analyses. It's really important in experimental biology too, and we don't use it often enough. Having a really rigorous idea of what the null hypothesis means, that is, what you expect to happen if you have found nothing new, is really useful as a starting point, because there have been entire projects in my lab that were born of not being able to craft a reasonable null hypothesis. You realize that the field has not thought about the question rigorously enough for you to actually write down a proper null hypothesis, and that there's a whole chunk of research that needs to be done just to get to that point. Anyway, I was clapping to myself when Ryan brought it up, and I just wanted to make sure that people knew it's not just for computational biologists. It's for everybody. - Give the null hypothesis some love. Thank you for that. I know we have graduate students and post docs watching, and I want to combine a few questions that have been asked by the audience. What are the different approaches that grad students and post docs can take when coming up with a scientific question? 
Obviously, as a graduate student and as a post doc, particularly when you're starting, you're at very different stages of training, and one could argue you go to graduate school to learn how to ask scientific questions, whereas as a post doc you're expected to know how to do that, although people don't always get the training they need for that. But that's another conversation. If you're a graduate student versus a post doc, how do you come up with a scientific question differently or similarly? I'll start with you, Ryan. - I don't really see a difference. I think that the approach is the same for myself, for post docs, for my graduate students, for everybody. The rate at which you pass through the different steps may be quicker, but I think the basic approach is the same, and it holds all the way up to even the senior faculty level at my institution. I have these conversations with other faculty sometimes, saying, I saw this really interesting thing. I'd love to explore it, so how do we think about exploring this topic? I just had this crazy thought. What would happen if we did X or Y? And then a scientific question is born, and that leads down the path of a new research project. I don't like to think of there being this dichotomy: at this stage you'll do X, Y, and Z, but at that stage you're going to do W, T, and S. I don't think there's any difference, actually. I try to maintain some level of parity in my group, at least, in how we approach projects. I don't see a difference in the level of rigor or the sophistication of the questions. The interests of course change from person to person, and the speed at which people come up with a question, and the speed at which they pass through the initial investigations to convince themselves that it's actually a relevant question to address, is of course very different. But I think that the underlying process is actually the same for everybody. 
- Angela. - I would totally agree with that. I think that the process of thinking is exactly the same. I would say the main difference is in how you prioritize the different kinds of things that come up. For example, if you were a graduate student, you might prioritize a project early in your career that takes more advantage of the established techniques or analyses in the lab, so that you could learn the ropes and get familiar with both the people and the ideas that are happening in the lab. You may choose to do that as a post doc as well, but it might take less time. As a graduate student later in your career, you might feel bold enough to undertake something that's kind of risky, like developing a new technique, or something that's really open ended, so that nobody's quite sure what it's going to look like, and that may or may not pay off. You might do some calculation about when you want to do that. But as a post doc, depending on what your career goals are, if you're thinking about launching an independent research program, you need to think about what's going to help you carve out a new area that will let you build a research program of your own. You might prioritize which kinds of projects you take on a little differently. Or if you're aiming to go into a predominantly teaching role, maybe you want to make sure that you're establishing an experimental or computational research program that's appropriate for that kind of environment. Exactly as Ryan says, I think the thinking process is exactly the same, but the thinking process is always going to yield more stuff than you could actually do, and the difference is in how you prioritize which things you choose to spend your time on, based on where you are at a given stage. - Let's see, Indira. You're muted. There you are. You're muted again. There you go. 
- I don't think so. - Yes, yeah, we can hear you. No. - Can you hear me now? - Yes. - Okay, sorry. I'm clicking on, over and over again. Now I have a marvelous echo on my own voice. Let's see. I can talk anyway. I will echo, quite literally, what Ryan and Angela said: the overall shape of doing the science is exactly the same. I think it's actually really important to say that explicitly to post docs, because it's important for post docs not to feel like there's something scientifically more exalted about them than a graduate student, and that goes for any of us, PIs also. We're all students of the natural world, and we're approaching the natural world in that same way: first you observe and describe, and then you design your experiments in the way we've been talking about. By imagining that any of us have gone past that student level in some way, we lose the humility that's really necessary for doing experiments. As has already been said, you might refine more quickly, or notice more readily, but the process is exactly the same. You identify a broad question with addressable sub questions. You describe adequately to define a phenomenon of interest. You have that give rise to testable hypotheses, ones with predictions that let you rule out alternatives, and this goes back to the null hypothesis ideas that have been discussed. And you're in day to day conversation with your data all the time, whatever stage you're at. You're making adjustments. You're refining in your mind whether the trajectory you're on is the right one. I think it's also important to stress that if you've chosen a good post doc and post doc project, you're learning new subject matter and new approaches and new ways of thinking, so you are resetting in many ways, and that brings you back to this business of being at the beginning again, and that's okay. 
You're trying to make yourself unique in all the intellectual world, so you want to do something that contrasts with what you did as a graduate student, so that you become the intersection of two things that nobody else has really been the intersection of before. I think that really is the essence of what I see as different between being a post doc and a grad student. - Great. Let me unmute myself. I have to say I was a little surprised that you all agree that the process is the same, because I went to grad school but I left research, so in my mind there were some differences. It's been really interesting to see that the basic approach is the same. You still need to have a lot of conversations, a lot of discussion, and I really appreciate your intention in saying there is parity. You see them as equal. A post doc may go through the process a little faster, but I think what really struck a chord with me is the importance of prioritizing, and that maybe the more senior you are, either as a graduate student or as a post doc, or as a trainee in general, the more willing you may be to take risks. And obviously as a post doc, depending on what you want to do, you may also need to be thinking about what new area you want to carve out, coming up with a question that is going to allow you to establish an independent program of research. That's really important when thinking, what is the scientific question that I want to ask? We have about eight minutes left in our conversation. I want to thank everybody who is watching for all the questions, and I'm going to apologize in advance because we're not going to be able to get through all of them. But I wanted my last question to be about risk. We have Ana Patel. I'm probably going to butcher your name. I'm sorry. 
Their question is: can you offer advice about balancing the risk versus the excitement and novelty of more difficult projects, and whether to dive into one specific project or question versus trying out multiple things before committing? When you're thinking about a scientific or research question, how do you balance risk, going for something that's really novel, versus something that's maybe more established and safer? I'll go with Indira first. - Did I get that right this time? I'm going to be a little glib in my answer and just say: stop believing in balance. Balance, or equilibrium, is a state where nothing gets done. It's static. You can't have balance and actually move forward. I'm exaggerating here. I understand that there are subtleties I'm ignoring, but I think it's really important to know that there are times when you're risk taking and you know it. The whole thing might be a bust, but we're going to give it X months, some number of months. I think it really helps to think about time, and about how much time you're willing to give to a risk. You evaluate that time not as a fixed number, like it's always seven months, but based on what you're doing you can make some judgment. In my mind it usually does hang around seven months. I can stand about seven months of something really risky that's not actually working before I think we need to do something different. And when I say I can stand it, it's not just me; as the PI, looking at my trainee, I can stand it if we can recover from seven months where nothing happens. More than that, and it's going to be a little more tricky. In that time, you go whole hog and see whether that risky thing works. The seven months is not seven blank months, as if we're only going to talk seven months into it. 
As has been said repeatedly, you're in constant dialogue with your data, and with the people who are relevant to helping you along with that data, and you can adjust as you go. We have small steps along the way to see, is this yielding, is this yielding, is this going to yield, and so forth. If not, we have some back burner idea that might be safer, with more certainty of results, so that if the risky thing doesn't work, we have something in reserve. Sometimes it goes the other way around. Depending on your stage of training, or the stage of the research question in your laboratory, you do the safe-ish thing first, buy yourself time, and then do the really risky thing. As a scientist you're going to need to do both, and you've got to acknowledge that you're going to lose some sometimes, but you're also going to gain some. I say just decide: this is going to be a risky period, or this is going to be a safe period, and go with that. I'll be quiet because I know that other people have more to add. - Angela. - It's really hard not to be glib about this, because it's one of those things. The glib thing I will say, which I totally think is true, is that often even the safest things turn out to be risky. You set out to do some experiment. It seems totally straightforward. Everyone agrees with you that it's going to be totally straightforward, and then you do it, and it's weird. And there you are. You've found something new, but it turns out that it was risky, and now you're sitting in the land of ambiguity that you might have been in had you done something weird to start out with. This is the fun and the emotional challenge of doing science: it's unpredictable. There are things that will be widely acknowledged to be technically or conceptually challenging to work on at the beginning, and then there are things that will turn out to be technically or conceptually challenging to work on even though nobody thought they were going to be hard. 
Both of those things can be true. That said, in terms of how to manage your project and get where you need to go, given that things are quite ambiguous, the key is, as Indira pointed out, this constant dialogue: where are you at? How close is this thing to yielding? Might it not yield at all? Might it be good to put it down for a while, try something else, and maybe come back to it? All of these are strategic decisions that other people in the lab and your PI can help you make. The biggest emotional trap in science, in my experience, is throwing good money after bad. Because you've put so much time and energy and effort into something, it's often really hard to let it go, and really hard to just say, it might not happen. I might save myself some grief, and also get to the part of science that I really love, by switching gears for a little bit or doing something different. Often it can be worthwhile to ask people that question: am I throwing good money after bad? Should I cut bait and try something else? I think if you ask any scientist who's been doing this for a long time, they will tell you that there are projects where they still don't understand why they didn't work, but where they had to cut bait and let them go. I have a number of them that I am still obsessed with, and I promise that one day I will make them work. I don't know why they don't work, but you can't do everything all the time. I think it's healthy to recognize that everything is on a spectrum of risk, and that even the safest projects can sometimes be in that zone where you have to check whether or not you should let them go for now, and to not be afraid of letting things go. There's an infinite amount of stuff to do. - Awesome. Ryan. - In the last 30 seconds or so: I totally agree with a lot of what was said. I'll take it another step further. 
I think trying to categorize questions as risky or not is similar to asking whether a question is stupid or not. I just don't see a real need for it. The risk is going to depend on how much preliminary data you've generated. The more data you generate, the less risky a project will get, or the more risky, because it's not looking likely. Learning when to cut the line, as Angela just said, is a super important component, but I don't like this dichotomy of risky versus non risky at all, because it all depends on how much data you have, and how much persistence you have to address the question. - All right. Thank you so much for being here, and for all of your wisdom. Before we close, I just want to offer some summarizing thoughts. We've done a lot of deep diving into the art of asking a scientific question, or developing a research question. I think some important takeaways are that it's really important to think about your interests and the context. Not just the interests of the field, but your own personal interests, your skills, and the context in which you are doing your experiments. It's really key to know what it is that you want to know with the question you're trying to answer. It's really important to talk to people, to read, to look at what data are out there, and to understand the caveats that go with those data. If I had to summarize our conversation, it's that coming up with a scientific or research question is an iterative process. You do things, talk to people, and adjust, and you do that over and over again. It's really important to listen to your data and to the people around you, to think about the techniques, approaches, and perturbations that are going to allow you to get a full answer to your scientific question, and then, based on the last question that we discussed, it's important to embrace risk taking. 
Obviously you have to be aware of your context, but with risk taking it's really important to know when to switch gears, cut bait, or let go of something. In science, I think it's all about risk taking, but sometimes it's really important to listen to what the data are telling you and not just cling to something because you're thinking, this was so interesting in my head! With that, we've reached the end of our time together. Once again, thank you so much, Angela, Indira, and Ryan, for spending this hour with us and for sharing all of your advice and insights. I hope this has been really valuable. This conversation is going to be available on the iBiology website. You can also find it on the NRMN website, and I will remind you that our course, Planning Your Scientific Journey, is going to be available in October of this year. For those who are interested in the course, you can check it out at bit.ly/planscientificjourney. Thank you so much for joining us, and we'll see you next time.