The need for more research

As my research continued and went further in depth, more questions began to arise. Moving closer to recent years allowed for more sources overall, but it still presented many limits. Many of the cases are still being reviewed or are currently working their way through the legal process, which was one of the challenges I faced while completing this research.

I began to contemplate what factors play a role in these shootings that are perhaps not as apparent as race or gender. One factor I began considering is mental health. I became curious about the mental state of both the officer who committed the shooting and the victim on the day of the shooting.

I was curious about how many of these victims had diagnosed or undiagnosed illnesses that contributed to their victimization at the hands of police. Perhaps if those illnesses had been diagnosed or treated, the victim would never have been in a situation to be shot or injured in the first place. On the other hand, how many police officers had diagnosed or undiagnosed illnesses that made them more prone to discharging their weapons? As a psychology minor, I have a basic understanding that poor mental health impairs decision making, and I came to the realization that there is a need for research on this topic. Mental health is stigmatized on social media and is a very delicate topic, which leaves a hole in research on this aspect. The mental health of the officer involved in a shooting might present certain symptoms of this social issue, and associating mental health with law enforcement might further raise tensions between law enforcement and citizens.

One takeaway from this project that I really enjoyed was being able to use my knowledge from both Criminal Justice and Psychology to examine the issue from multiple perspectives. Doing so provided different types of insight and complemented my work.

Moving forward, I will keep these questions and thoughts in mind when drawing the overall conclusions of the research. I plan on continuing this work because it is vital to my field and timely.

Characterization of the Microbial Community of the Accessory Nidamental Gland of the Longfin Inshore Squid, Loligo pealei

The 16S rRNA gene from cultured isolates from the squid accessory nidamental gland was amplified by PCR and sent out for sequencing. Results from the 8 initial isolates came back, allowing us to identify the different genera of bacteria associated with the squid. The amplified region covered almost the entire length of the 16S rRNA gene, allowing for a reliable identification of each bacterial isolate. The two primers used (8F, 1514R) primed synthesis from opposite ends of the gene. From the two independent sequencing reactions, a contiguous DNA sequence was assembled with CodonCode Aligner. We then performed a standard nucleotide BLAST (NCBI) in an attempt to find closely related bacteria and the other invertebrates with which they might be associated.
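
For readers curious about the identification step, here is a minimal sketch of how a BLAST search can be scripted with Biopython's NCBI web interface. We ran our searches through NCBI's standard nucleotide BLAST, so this is only an illustrative equivalent, and "contig.fasta" is a placeholder file name.

```python
# Sketch only: scripting the BLAST step with Biopython. Requires internet
# access; "contig.fasta" stands in for the CodonCode Aligner contig export.
from Bio.Blast import NCBIWWW, NCBIXML

with open("contig.fasta") as handle:
    contig = handle.read()                    # assembled 16S rRNA contig

result_handle = NCBIWWW.qblast("blastn", "nt", contig)   # query NCBI over the web
blast_record = NCBIXML.read(result_handle)

for alignment in blast_record.alignments[:5]:            # top five hits
    print(alignment.title, alignment.hsps[0].expect)     # description and E-value
```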

These basic molecular and microbiological techniques were then applied to an unexpected find: invasive clinging jellyfish, Gonionemus vertens, discovered in July during research in the Long Island Sound. I have isolated 31 different bacterial colonies derived from a single clinging jellyfish specimen. We have gone through the process of isolating several samples, amplifying the 16S rRNA gene, and characterizing the basic bacterial morphology. Isolating these clinging jellyfish bacteria revealed a number of colonies with a green or orange iridescence. We are interested in identifying these novel bacteria and would like to discover what role the iridescent bacteria play in association with the jellyfish. After the 16S rRNA gene is amplified and cleaned up, the samples will be sent out for sequencing. From there, we will once again be able to further classify and explore the different types of bacteria associated with the clinging jellyfish.

Blog #2

Since the last blog, our research project has developed; we have made many strides and crossed many tasks off of our research checklist. Those tasks include reviewing the current research on our topic in depth and beginning a literature review, registering with the IRB, creating an informed consent document for our human research subjects, creating a flyer to attract nursing majors to participate in our study, and compiling a list of questions for our student interviews.

Some of our successes include frequent and open communication between Dr. Northrup and me and swiftly completing tasks to get our research to where we want it to be. Some difficulties include needing to wait for the school year to begin before conducting our nursing student interviews and not having many current articles or studies on our specific topic to use as a guide.

I have learned that the research process is always changing. I have learned to let the process guide the project instead of letting my original idea of the project control the process. The end product may be very different from the original idea, but that is the beauty of research.

This research project has made a lasting impression on me. As someone who has always found research very intimidating, I have learned that by taking it step by step, it can be an enjoyable journey. For this upcoming year, Dr. Northrup and I will be conducting interviews with junior nursing students, compiling the data, and drawing conclusions based on the interviews.

Blog Post 2

The desire to make a change has allowed me to make tremendous strides in both the research and the development of the application. Our team worked on the development of a virtual reality application to assist Alzheimer's patients and their caregivers, specifically during the sundowning period. Our app, DiscoVeR, takes the user into three distinct worlds: a jungle, a city, and a farm. In each world, the user looks around to interact with the various objects and animals. Simply glancing at an animal assigns the user a quest to find a certain food or drink the animal is looking for, which appears randomly around the world; the user then finds the item and returns it to the animal. My favorite part of the application was a late addition: a music room. Instead of animals, the user interacts with various instruments, and each glance plays a different clip of the respective instrument. Also included in the music room is a jukebox that contains about 20 classic songs that older users should remember from their childhood and adulthood. Music therapy is a proven technique for helping these patients, so including it in the application was a necessity.
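
To make the quest loop concrete, here is a rough, platform-agnostic sketch of the gaze-to-quest logic described above. The real app runs in a VR engine, so every name and item here is illustrative only.

```python
import random

FOODS = ["banana", "bottle of water", "ear of corn"]   # illustrative items

class Animal:
    """Sketch of one quest-giving animal in a DiscoVeR world."""
    def __init__(self, name):
        self.name = name
        self.wanted = None                   # no active quest yet

    def on_gaze(self):
        """A glance at the animal assigns a fetch quest."""
        if self.wanted is None:
            self.wanted = random.choice(FOODS)   # item then spawns in the world
            print(f"The {self.name} is looking for a {self.wanted}.")

    def on_item_delivered(self, item):
        """Returning the right item completes the quest."""
        if item == self.wanted:
            print(f"The {self.name} is happy!")
            self.wanted = None
            return True
        return False

# Example: glance at the parrot, then bring back what it asked for.
parrot = Animal("parrot")
parrot.on_gaze()
parrot.on_item_delivered(parrot.wanted)
```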

Although a working application is now complete, there are still many questions that need to be answered to further enhance the project. First off, the application needs to be tested with more seniors and those suffering from Alzheimer's. What makes sense to us as the developers might not make sense to a senior citizen using the app, and that is something that has to be taken strongly into account. Directions need to be clear so that seniors are able to use the app with minimal difficulty. The potential additions to the application are endless, and our team of developers and researchers is constantly looking for ways to improve the project. The music room seems to be a favorite among those who have tried the app, so enhancing its features would be ideal. Customizing the songs in the jukebox, along with letting users add their own pictures to a wall, could make the experience different and personalized for each individual.

Working on the app thus far has been an awesome experience, and one that has only just begun. It has expanded my knowledge into a field where I never saw myself taking any interest. It's amazing to think how much of an impact this technology can truly have on individuals with Alzheimer's, and I am honored and ecstatic to be one of the many people working toward bettering the lives of all those involved!


The Progress, Successes, Challenges, and Reflections of Analyzing the Link Between Athletes' PR Scandals and Their Endorsements

At this point in our research we are still gathering the data needed to measure the financial effect created when an athlete with major endorsements endures a public relations scandal. Our focus has been a top-down secondary research approach collecting quantitative data. At the beginning of the project we decided to collect and analyze data for a minimum of 35 athletes; the initial list we compiled had 43 subjects. For each subject, we must determine which company holds their biggest endorsement deal, as well as the date they acquired that endorsement. The data variables we are analyzing for each subject include the intensity of the athlete's scandal, the company's revenue and stock price (at the date of the scandal, one week after, one month after, and three months after), and the S&P 500 (at the date of hire, at the date of the scandal, and one month after). The intensity of the athlete's scandal represents how harsh or vivid the scandal is to the public. What is the difference in economic shock created by an athlete with abundant endorsements who gets caught using performance-enhancing drugs versus that of an athlete with the same degree of endorsement acclaim who is alleged to have abused his spouse? By assigning each scandal a number that represents its intensity, we are able to account for this concept of severity. We currently have a good amount of data still to work through, and it is already obvious that measuring the impact a player has on a company he endorses is complex and intricate.
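
As an illustration of how the price variables line up around an event, the sketch below pulls a sponsor's stock price at the scandal date and at fixed offsets afterward. The helper name and exact offsets are our own illustration, not our finished methodology; real work has to respect trading calendars, which is why the code falls back to the nearest prior trading day.

```python
import pandas as pd

# Illustrative offsets matching the variables described above.
OFFSETS = {"scandal": 0, "+1 week": 7, "+1 month": 30, "+3 months": 91}

def event_window_prices(prices: pd.Series, scandal_date: str) -> dict:
    """prices: daily closes indexed by trading date (sorted DatetimeIndex).
    Returns the close at the scandal date and at each offset, using the
    last trading day on or before the target when markets were closed."""
    t0 = pd.Timestamp(scandal_date)
    out = {}
    for label, days in OFFSETS.items():
        target = t0 + pd.Timedelta(days=days)
        nearest = prices.index.asof(target)      # last trading day <= target
        out[label] = float(prices.loc[nearest]) if pd.notna(nearest) else None
    return out
```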


The data we have collected has not raised any puzzling questions so far, and collecting the revenue and stock price numbers has been straightforward. The biggest challenge we have faced in the data collection process is determining when two key dates occurred: when the athlete acquired the endorsement, and when the athlete created the scandal. These dates are important for determining the trends in stock prices and revenues that occur from the addition of the athlete to the company all the way up to the day the scandal breaks, and then from the day the scandal breaks to the day the fallout tapers off. Another issue is that some of the subjects we planned to use in the research did not have prominent enough endorsements to be included in the study. For example, the ex-Cowboy Greg Hardy was at the center of a domestic violence case that was a popular sports story last year. After hearing more and more about Greg Hardy's case throughout last football season, we figured he would be an excellent subject for our study. It took one search to find out he had no endorsements worth evaluating, which ruled out his case.


On the other hand, we have experienced success. In our opinion, our biggest success thus far is realizing how detailed we can make this study. There are many small variables that affect the economic relationship between an athlete and the company they endorse, and this project is one we will continuously add new elements and variables to. Although this is a quantitative-heavy study, we have learned that the relationship between athletes and their endorsements can also be analyzed qualitatively. What I mean is that when a scandal occurs it ultimately affects people, and those people perceive the scandal in a multitude of ways. Companies sponsoring these athletes can argue that they had no clue about the athletes' behavior prior to the endorsement deal, and if the rest of the public perceives the situation the same way, then the economic impact created by the scandal's occurrence could be slim to none.


The opportunity to complete this research has been advantageous to our academic careers, and the final material we produce will truly be an asset to us. Currently, our immediate plans include continuing to add data to our set and continuing to develop detailed ideas about how scandals economically affect an athlete's endorsement company. Our future goal is to present the work to an economics panel for potential publication.

A Taxonomy of Central Bank Data Visualization

Central banks around the world continuously release huge amounts of data. This data is then used by policy makers, business owners, and the general public to make sensible choices regarding public policy, investment, and business decisions. The format in which central banks release their data varies. The purpose of this research project is to identify some of the major differences in how central banks choose to communicate with the public through their data releases and whether or not those choices have a significant impact on the economy and the financial system. To accomplish this, Professor Weinstock and I will be comparing the data visualization practices of six major central banks: the Federal Reserve, the BOE, the BOJ, the Riksbank, the Swiss National Bank, and the ECB.
We hope to uncover the chronology in which central banks began using a variety of visual communication tools, such as fan charts, which are now common at many central banks around the world. Although many central banks use fan charts to visually represent their forecasts to the public, the manner in which these fan charts are created and displayed varies greatly from one central bank to another. We would like to determine which central banks have been more innovative than others and whether the marginal benefits of releasing such forms of data have had a positive impact on the economy. Our research involves identifying which central banks began these trends in transparency and communication with the public, as well as which central banks followed in the footsteps of others. We will study the various forms of data releases and their effect on economic growth, employment, inflation, and financial stability.
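
As a visual reference, the snippet below draws a stylized fan chart of the kind discussed here. Every number in it is invented purely to show the form: a central forecast path surrounded by probability bands that widen over the forecast horizon.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stylized fan chart with made-up numbers, for illustration only.
horizon = np.arange(13)                      # quarters ahead
median = 2.0 + 0.05 * horizon                # hypothetical inflation forecast (%)

for coverage, alpha in [(0.9, 0.15), (0.6, 0.25), (0.3, 0.40)]:
    half_width = coverage * 0.8 * np.sqrt(horizon)   # bands widen with horizon
    plt.fill_between(horizon, median - half_width, median + half_width,
                     color="tab:red", alpha=alpha)

plt.plot(horizon, median, color="darkred")
plt.xlabel("Quarters ahead")
plt.ylabel("Inflation (%)")
plt.title("Stylized fan chart")
plt.show()
```
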
How this data is perceived by the public helps determine how the economy will grow or contract. Professor Weinstock and I are interested in the extent to which data visualization released by central banks impacts expectations and decision making. Central bankers may try to influence the decisions made by investors by emphasizing some economic indicators while downplaying others. Central banks can alter the display of visual data; does this affect how the public then responds to that data? Central banks know that a lot of smart people are watching them, so they may feel there are some things they can get away with, but they also want to be careful. Ultimately, this research investigates central bank transparency and its impact on central bank credibility.

Blog Post #2

In my research project with Professor Sean Daly, we have made significant progress toward answering our research questions.

In our research, we have utilized various risk metrics, such as the information ratio (with the ACWI as a world benchmark), to arrive at our conclusions. The information ratio is computed by dividing an ETF's excess return (the return beyond its benchmark) by its excess risk (the volatility of returns above and beyond the benchmark, also known as tracking error). It lets us see whether an ETF's returns are truly impressive relative to its benchmark and to the added risk entailed.
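
In code, the calculation looks roughly like this. The function assumes two same-frequency return series already aligned on the same dates, and the annualization convention is our own choice rather than a universal standard.

```python
import numpy as np

def information_ratio(etf_returns, benchmark_returns, periods_per_year=12):
    """Mean active return divided by tracking error, annualized.
    Assumes aligned, same-frequency (here monthly) return series."""
    active = np.asarray(etf_returns) - np.asarray(benchmark_returns)
    tracking_error = active.std(ddof=1)          # the "excess risk"
    if tracking_error == 0:
        return float("nan")
    return float(active.mean() / tracking_error * np.sqrt(periods_per_year))

# Toy example with made-up monthly returns:
etf = [0.03, 0.01, -0.02, 0.04, 0.02, 0.01]
acwi = [0.02, 0.00, -0.01, 0.02, 0.01, 0.01]
print(information_ratio(etf, acwi))
```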

Most recently, we have measured two separate groups: the PIIGS, a group of developed European countries, and the "Urdanetas," a name we use for a select group of emerging-market ETFs that have outperformed most other nations in terms of growth.

The PIIGS acronym stands for Portugal, Italy, Ireland, Greece, and Spain. With the exception of Ireland, these countries have information ratios that have continued to decline even years after the initial PIIGS crisis of 2010. Though they are well-known and established markets, these countries are simply not delivering adequate risk-adjusted returns from the standpoint of the universe of international ETFs now available to US investors.

Named after the famous Spanish navigator who pioneered the Manila-to-Acapulco return route in the 16th century, the Urdanetas consist of Indonesia, Malaysia, the Philippines, Thailand, Sri Lanka, Colombia, Peru, and Chile. As a group, they have significantly outperformed their benchmark over the past three years.

As one collective group, these emerging markets have triple the annualized returns of the S&P 500, with a much lower beta (vis-à-vis the ACWI) and less volatility. This was surprising because the past three years have seen a real downturn in oil and commodities. Some of these countries benefit from low commodity prices (the Philippines, Thailand, Malaysia) while others benefit from higher prices (Peru, Colombia, Chile), so there is an interesting internal balance at work.

Comparing the Urdaneta markets, the only one with a negative information ratio for the period was Sri Lanka. If, as an investor, you were to remove Sri Lanka from the list, the group would yield an information ratio of 1.48 over 2014-2016, which is remarkable.

Though hardly known to US investors, Peru had an information ratio of 3.24 for the period of 2014 to 2016, a stunning achievement when you consider that any positive value counts as outperformance.

We have also been researching the Nigeria ETF (NGE) as a possible "mean reversion" trade, comparing its monthly returns to the Brent futures market since Nigeria's currency depreciation in June 2016.

Blog Post #2

After careful consideration, we have had to slightly alter our initial topic. Our main question is whether the effect of cigarettes on lifespan is linear. To analyze this, we must look at the significance of the squared term in our model.

Professor Colman and I have put a considerable amount of work into preparing the ideal data set for our analysis. We have had to merge data sets and cut out certain parts of the data, filtering on age (we started at age 18) and smoker status (we did not include those who have never smoked or those who opted out of the survey). This preparation required some additional resources from various Stata textbooks, along with the help of Professor Colman. From this, we have prepared a data set comprising a sample of people, ages 18 to 96, who took the NHIS survey between 1986 and 2009. The survey gathers information about each person, including their smoking habits and age. Respondents are then followed in the national death records year by year to determine whether they have died. From this, we created a dummy variable for our analysis, marking "1" if the person has died and "0" if the person is still alive.
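
A rough sketch of the filtering and dummy-variable step, translated into pandas with hypothetical file and column names (our actual cleaning is done in Stata):

```python
import pandas as pd

# Hypothetical names throughout; the real NHIS extract uses different ones.
df = pd.read_csv("nhis_linked_mortality.csv")
df = df[df["age"] >= 18]                                    # start at age 18
df = df[df["smoker_status"].isin(["current", "former"])]    # drop never-smokers and opt-outs
df["died"] = (df["mortality_status"] == "deceased").astype(int)  # 1 = died, 0 = alive
```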

Originally, before seriously delving into the data, we thought we would use general multiple regression analysis to conduct our research. However, we determined that a better, more specific approach would be maximum likelihood estimation. This type of analysis is a better choice for our project because we are modeling the probability that a smoker dies given their smoking habits, rather than a continuous outcome. With maximum likelihood, we utilize Newton's method, which at each step approximates the natural log of our likelihood function with a downward-curving quadratic. That quadratic has a maximum point (the highest point on the curve), and stepping to that point repeatedly drives us toward the maximum of the likelihood. For our analysis, we maximize the log-likelihood (we use the log-likelihood instead of the likelihood itself because of the large sample size) as a function of each smoker's habits.
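
To show the mechanics, here is a minimal Newton's-method fit of a logit model for death on cigarettes per day and its square. This is a sketch of the algorithm on simulated data, not our actual Stata estimation; the design matrix and coefficients are invented for illustration.

```python
import numpy as np

def logit_newton(X, y, max_iter=50, tol=1e-10):
    """Maximize a logit log-likelihood by Newton's method: each step fits
    a downward-curving quadratic to the log-likelihood and jumps to its peak."""
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # predicted death probabilities
        score = X.T @ (y - p)                 # gradient of the log-likelihood
        hessian = -(X.T * (p * (1 - p))) @ X  # curvature (negative definite)
        step = np.linalg.solve(hessian, score)
        beta = beta - step                    # Newton update
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Hypothetical design: intercept, cigarettes/day, and its square, so the
# significance of beta[2] speaks to whether the effect is linear.
rng = np.random.default_rng(0)
cigs = rng.integers(1, 40, size=500).astype(float)
X = np.column_stack([np.ones(500), cigs, cigs ** 2])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-3 + 0.05 * cigs))))
print(logit_newton(X, y))
```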

In addition to maximum likelihood, I also had to look into how survival analysis is conducted in Stata. Here one works with a survivor function, which represents the probability that a person is still alive at a given time, and a hazard function, which represents the person's instantaneous risk of dying. As one would presume, as risk increases over time, with the person aging and smoking more cigarettes, the hazard increases as well, and therefore so does the likelihood of death; and vice versa.
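
As a rough Python counterpart to Stata's survival commands (an assumption on our side, since the real analysis lives in Stata), the lifelines package estimates both quantities from duration and event data:

```python
from lifelines import KaplanMeierFitter, NelsonAalenFitter

# Toy data: years each person was observed, and whether they died (1) or not (0).
durations = [5, 8, 12, 3, 20, 15, 7, 22]
died      = [1, 0,  1, 1,  0,  1, 0,  0]

kmf = KaplanMeierFitter().fit(durations, event_observed=died)   # survivor function S(t)
naf = NelsonAalenFitter().fit(durations, event_observed=died)   # cumulative hazard H(t)
print(kmf.survival_function_.head())
print(naf.cumulative_hazard_.head())
```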

We are currently going through the interpretations of these results, which we are eager to include in our report next week.

Student-Faculty Summer Research- Blog Post #2

Since my last blog post, Dr. Tekula and I have made significant progress in our research. As previously mentioned, our two main data sources are the Global Hunger Index (GHI) and Commodity Systems Incorporated. Through Commodity Systems Incorporated, we were able to examine indicators of the volatility of prominent food commodities, such as corn, wheat, orange juice, and soy meal. For each commodity, these indicators include daily volatility measured over 1- and 5-year trailing windows of returns from holding the commodity future, and abnormal volatility, a continuous indicator measuring how high recent volatility is compared to long-run volatility. As for the GHI, Dr. Tekula and I ran into a small issue with collecting the data. We discovered that the Global Hunger Index reports only began being published annually by the International Food Policy Research Institute (IFPRI) in 2006; prior to that, scores were calculated using data from sources such as UNICEF, the World Bank, and the Food and Agriculture Organization. Since we wanted our data to go as far back in time as possible, we decided to construct our own dataset by pulling information from each Global Hunger Index report. After cleaning the data, our final dataset covers 131 countries over the past 21 years and includes eight years of GHI scores, the prevalence of undernourishment in children and adults, the under-five mortality rate, and child wasting and stunting.
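
For concreteness, here is how such an abnormal volatility indicator might be computed from a daily returns series. The window lengths are our assumptions for illustration, not Commodity Systems Incorporated's definitions.

```python
import pandas as pd

def abnormal_volatility(daily_returns: pd.Series) -> pd.Series:
    """Recent volatility relative to long-run volatility for one commodity.
    Window lengths (252 trading days ~ 1 year) are assumptions."""
    recent = daily_returns.rolling(252).std()          # ~1-year trailing volatility
    long_run = daily_returns.rolling(252 * 5).std()    # ~5-year trailing volatility
    return recent / long_run                           # > 1 means recent vol is elevated
```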

Additionally, I have spent a large portion of my time researching scholarly journal articles related to our topic and learning an immense amount of information. I came across authors who have conducted research similar to ours, yet not identical, making their work a valuable source of information for our literature review. For example, some focused on the micro-effects of surging food prices in regions such as Sub-Saharan Africa or Latin America. Reading in-depth studies that focus their analysis on a specific region has given me a deeper understanding of how spikes in food prices may directly impact local hunger, and also of how the effects may differ by geographical region. These scholarly articles also give us a model to follow for our methodology section.

While Dr. Tekula and I are still in the process of drawing our final conclusions, I have been experimenting with the data in Stata and Excel. Taking into consideration the fact that countries vary in size, I created a population-based weight for each country in Stata. Using the United Nations Population Division database, I was able to match each of our 131 countries with its population size to produce weighted and unweighted GHI scores. This gives us a more accurate picture of the average GHI score per country when comparing it to the volatility indicators.
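
The weighting step amounts to the following; the column names are hypothetical stand-ins for our Stata variables.

```python
import pandas as pd

def weighted_mean_ghi(df: pd.DataFrame) -> float:
    """Population-weighted average GHI score across countries.
    Expects hypothetical columns 'ghi_score' and 'population'."""
    weights = df["population"] / df["population"].sum()
    return float((df["ghi_score"] * weights).sum())
```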

This research opportunity has been extremely beneficial and exciting for me, as I have gained an incredible amount of knowledge on a wide variety of topics that truly interest me. I have accomplished the goals I set in my previous blog post, such as gaining confidence in my research skills, exposure to new methods for conducting effective research in the future, and more practice in Stata and Excel. I truly believe that this paper, once completed, will be a phenomenal piece to showcase my skills and interests to potential employers as I prepare to enter the job market in May. Most importantly, I have sincerely enjoyed working closely with Dr. Tekula throughout this summer. Her expertise, guidance, and support have greatly benefited me, both personally and professionally. I look forward to concluding this project with Dr. Tekula and presenting our findings at events such as Research Day.

UGRI post 2

Discuss progress made so far. Describe data and/or results and findings. Provide insights and reflections on data and/or results and findings / What did you learn from the project?

My literature research focused on HPLC settings for analyzing amphetamines. All of the articles used a C18 column as the stationary phase but had slightly different mobile phases. One article described the mobile phase as "consisting of methanol, 0.1 % (v/v) triethylamine (adjusted to pH 4.50 with acetic acid) aqueous solution (45:55)" (A1); another consisted of "solvent A: water with 2 mM ammonium formate/0.2% formic acid, solvent B: AcN 100%" (A3). Other mobile phases consisted of "methanol with 0.1 % formic acid in water" (A4) and "methanol containing 10% of acetonitrile (phase A) and 5 mM of formic acid in water (phase B)" (A5). Many articles used methanol, acetonitrile, and formic acid in their mobile phases, which suggests these chemicals are more reliable, accurate, or easily obtained than others.

Some articles also provided an optimal column temperature for HPLC analysis. A general guideline suggested a temperature between 15 and 35 °C, while one article stated a temperature of 40 °C and another 50 °C. Reading the recovery percentages for the experiments using these temperatures showed that the experiment using a column temperature of 40 °C had recoveries averaging 75-100% (A3), while the experiment with a column temperature of 50 °C had recoveries below 80% (A4). While this does not show causation, it can be read as a correlation suggesting that lower temperatures, around 30-40 °C, may provide a better environment for the separation and analysis of illicit drugs by HPLC.

Another similarity between the articles examined was the flow rate and the injection volume of samples. Most of the articles used a flow rate between 0.1 and 0.3 mL/min (A1, A3, A4, A5) and an injection volume of 0.01-0.1 mL. The consistency across these articles suggests that these volumes and rates give the best data and analysis, and they provide a starting point for further experiments in method development.

For the best results, it is recommended to use a gradient elution process, which differs depending on the solutions used; most articles that included HPLC used such a process. For example, A3 described the elution used as "0–1 min: 98% A, the organic phase B then increased to 98% within 10 min. After that, the column was cleaned with 98% mobile phase B for 2 min." The differences between the elution processes used make it difficult to determine which timescales and settings provide the best results. For future experiments, this process may have to be determined experimentally.

Include any questions raised from collected data

Not all of the experiments examined in this project used SPE as the sample collection and purification process, as we had planned to. The one question raised through this research was whether there is a better way to extract and purify spiked samples and, if so, how and why those methods are better.

Explain any challenges and/or successes you’ve experienced with the project

When looking for articles and compiling other researchers' work, it was difficult to find exactly what I wanted, so I had to piece together the information from numerous articles. My focus was not directly discussed in the articles I was able to find, which made searching for the information I wanted even harder.

Reflect on impact the project had on you and any future plan you may have related to this project

This project focused on literature reading and research. Because of this, I have become more adept at reading and understanding scientific literature and research papers. The results from this project can now be used to continue laboratory research focused on developing a method for illicit drug analysis using SPE and HPLC techniques.