# Emergency department use and Artificial Intelligence in Pelotas: design and baseline results

## ABSTRACT

### Objective:
To describe the initial baseline results of a population-based study, as well as a protocol to evaluate the performance of different machine learning algorithms in predicting the demand for urgent and emergency services in a representative sample of adults from the urban area of Pelotas, Southern Brazil.

### Methods:
The study is entitled “Emergency department use and Artificial Intelligence in PELOTAS (RS) (EAI PELOTAS)” (https://wp.ufpel.edu.br/eaipelotas/). Between September and December 2021, a baseline survey was carried out with participants. A follow-up was planned for 12 months later to assess the use of urgent and emergency services in the preceding year. Machine learning algorithms will then be tested to predict the use of urgent and emergency services over one year.

### Results:
In total, 5,722 participants answered the survey, mostly females (66.8%), with a mean age of 50.3 years. The mean number of people per household was 2.6. Most of the sample had white skin color and incomplete elementary schooling or less. Around 30% of the sample had obesity, 14% diabetes, and 39% hypertension.

### Conclusion:
The present paper presented a protocol describing the steps that were and will be taken to produce a model capable of predicting the demand for urgent and emergency services in one year among residents of Pelotas, in Rio Grande do Sul state.

## INTRODUCTION

Chronic diseases affect a large part of the population of adults and older adults, leading these individuals to seek urgent and emergency care. The implementation in 1988 of the Unified Health System (SUS) resulted in a model aimed at prevention and health promotion actions based on collective activities 1 – starting at Basic Health Units (UBS). There is also the National Emergency Care Policy, which advanced the construction of the SUS and has as guidelines universality, integrality, decentralization, and social participation, alongside humanization, the right of every citizen 2.
In a study that evaluated the characteristics of users of primary health care services in a Brazilian urban-representative sample, the vast majority were women and belonged to the poorer strata, and almost one-quarter of the sample received benefits from the national income distribution program (family allowance) 3. Brazil is a highly unequal country in socioeconomic terms; approximately 75% of the Brazilian population uses the SUS, depends exclusively on it, and does not have private health insurance 4,5. Individuals with multimorbidity account for a large share of those who seek urgent and emergency services 6. Multimorbidity is a condition that affects a large part of the population 7, especially older adults 7. In addition, the association of multimorbidity with higher demand for emergency services makes it challenging to appropriately manage and prevent these problems 8,9. Innovative approaches may allow health professionals to provide direct care to individuals who are more likely to seek urgent and emergency services. The use of artificial intelligence can make it possible to identify and monitor groups of individuals with a higher probability of developing multimorbidity. In this context, machine learning (ML), an application of artificial intelligence, is a promising and feasible tool to be used on a large scale to identify these population subgroups. Some previous studies have demonstrated that ML models can predict the demand for urgent and emergency services 10,11. In addition, a systematic review showed that ML could accurately predict the triage of patients entering emergency care 12. However, in a search for studies in Brazil, we found no published article on the subject. In Brazil, urgent and emergency services are a fundamental part of the health care network, ensuring timely care in cases of risk to individuals’ lives 9. These services are characterized by overcrowding and high demand.
In addition, with the current COVID-19 pandemic, updated evidence on the characteristics of the users seeking these services is timely and necessary. The objective of this article was to describe the initial baseline results of a population-based study, as well as a protocol to evaluate the performance of different ML algorithms in predicting the demand for urgent and emergency services in a representative sample of adults from the urban area of Pelotas.

## METHODS

The present cohort study is entitled “Emergency department use and Artificial Intelligence in PELOTAS-RS (EAI PELOTAS)” (https://wp.ufpel.edu.br/eaipelotas/). The baseline was conducted between September and December 2021, and a follow-up was planned for 12 months later. The cross-sectional component was used to measure the prevalence of urgent and emergency care use and the prevalence of multimorbidity, in addition to other variables and instruments of interest. The prospective cohort design intends to estimate the risk of using and reusing urgent and emergency services after 12 months. Contact information, collected to ensure follow-up, included telephone, social networks, and full address. We also collected the latitude and longitude of households for control of the interviews.

## Study location and target population

The present study was conducted among adults living in households in the urban area of Pelotas, Rio Grande do Sul (RS), Southern Brazil. According to estimates by the Brazilian Institute of Geography and Statistics (IBGE) in 2020, Pelotas had an estimated population of 343,132 individuals (https://cidades.ibge.gov.br/brasil/rs/pelotas/panorama). Figure 1 shows the location of the city of Pelotas in Brazil.

**Figure 1.** *Map of Brazil highlighting the city of Pelotas (RS).*

Pelotas has a human development index (HDI) of 0.739 and a per capita gross domestic product (GDP) of BRL 27,586.96 (https://www.ibge.gov.br/cidades-e-estados/rs/pelotas.html).
The municipality has a Municipal Emergency Room that operates 24 hours a day, seven days a week, and serves about 300 patients a day, according to data provided by the unit.

## Criteria for inclusion and exclusion of study participants

We included adults aged 18 years or older residing in the urban area of Pelotas. Children and individuals who were mentally unable to answer the questionnaire were not included in the sample.

## Sample calculation, sampling process, and data collection

The sample size was calculated considering three objectives. First, to determine the sample size required to assess the prevalence of urgent and emergency services use, we considered an estimated prevalence of 9%, a margin of error of ± two percentage points, and a 95% confidence level 13, concluding that 785 individuals would be necessary. Second, for multimorbidity prevalence, an estimated prevalence of 25%, a margin of error of ± three percentage points, and a 95% confidence level were used 14,15, again reaching a total of 785 individuals. Finally, for the association calculations, similar studies in Brazil were assessed, and the following parameters were considered: confidence level of 95%, power of 80%, exposed/unexposed ratio of 0.1, percentage of the outcome in the unexposed of 20%, and a minimum prevalence ratio of 1.3. With these parameters, 5,104 individuals would be necessary to study the proposed associations. Adding 10 to 20% for losses and/or refusals, the final sample size would comprise 5,615–5,890 participants. The process to obtain a population-based sample was carried out in multiple stages. The city of Pelotas has approximately 550 census tracts, according to the last update provided by IBGE in 2019. From these, we randomly selected 100 sectors. Since the sectors vary in size, we defined a proportional number of households for each.
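The prevalence-based sample sizes above follow the usual normal-approximation formula n = z²p(1−p)/e²; a minimal sketch is given below. Exact figures depend on the software and rounding used, so small differences from the reported 785 are expected.

```python
import math

def sample_size_prevalence(p, margin, z=1.96):
    """Sample size for estimating a prevalence with a given absolute margin
    of error, using the normal approximation n = z^2 * p * (1 - p) / e^2."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Use of urgent and emergency services: prevalence 9%, margin +/- 2 pp
n_services = sample_size_prevalence(0.09, 0.02)        # close to the reported 785
# Multimorbidity: prevalence 25%, margin +/- 3 pp
n_multimorbidity = sample_size_prevalence(0.25, 0.03)
```

The association-driven sample size (5,104) comes from a different power calculation for prevalence ratios and is not reproduced here.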
Thus, it was estimated that, in total, the 100 sectors contained approximately 24,345 eligible households. To interview one resident per household, we divided the total number of households by the required sample size, which resulted in 4.3. Based on this, we divided each of the 100 sectors by 4.3 to reach the necessary number of households per sector. One resident per household was interviewed, resulting in a total of 5,615 households. If there was more than one eligible resident, the choice was made with a random number generator application: residents were placed in order, a number was assigned to each one, and one of them was selected according to the result of the draw. The first household interviewed in each sector was selected through a draw, considering the selected sampling interval (4.3 households). Commercial buildings and empty dwellings were considered ineligible, and thus the next dwelling was chosen. Due to the large number of empty houses, it was necessary to select another 50 sectors to complete the required sample size. The additional households were drawn according to the same methodological criteria as the first draw to ensure equiprobability.

## Data collection instrument

We collected the data with the Research Electronic Data Capture (REDCap), an electronic data collection tool administered on smartphones 16,17. Experienced and trained research assistants collected the data. The EAI PELOTAS questionnaire was prepared, when possible, based on standardized instruments, including questions about chronic diseases, physical activity, food security, use of urgent and emergency services, functional disability, frailty syndrome, self-perceived health, and COVID-19, in addition to sociodemographic and behavioral questions. Supplementary Table 1 shows the instruments utilized in the present study.
**Table 1.**

| Characteristics | EAI PELOTAS* Crude % (95%CI) | EAI PELOTAS* Survey design % (95%CI) | PNS 2019† % (95%CI) |
| --- | --- | --- | --- |
| Mean age, years | 50.3 (49.9–50.8) | 46.2 (45.5–47.0) | 46.7 (45.9–47.5) |
| Mean number of household members | 2.6 (2.5–2.7) | 2.7 (2.6–2.8) | 3.0 (2.9–3.1) |
| Female (%) | 66.8 (65.6–68.0) | 54.2 (52.4–55.6) | 54.1 (51.7–56.4) |
| **Skin color (%)** |  |  |  |
| White | 78.2 (77.1–79.2) | 77.3 (74.9–79.5) | 76.8 (74.6–78.7) |
| Black | 15.0 (14.1–16.0) | 15.3 (13.5–17.3) | 8.3 (7.0–9.8) |
| Brown | 6.1 (5.5–6.7) | 6.7 (5.7–7.9) | 14.5 (12.9–16.3) |
| Other | 0.7 (0.5–1.0) | 0.7 (0.4–1.1) | 0.4 (0.2–0.8) |
| **Schooling (%)** |  |  |  |
| Incomplete elementary school or less | 35.7 (34.5–37.0) | 31.3 (28.6–34.2) | 30.2 (28.1–32.4) |
| Complete elementary school/incomplete high school | 16.2 (15.3–17.2) | 16.4 (15.1–17.7) | 15.7 (14.0–17.5) |
| Complete high school/incomplete higher education | 33.5 (32.3–34.7) | 37.6 (35.6–39.6) | 36.9 (34.6–39.2) |
| Complete higher education or more | 14.6 (13.7–15.5) | 14.7 (12.4–17.4) | 17.2 (15.7–18.9) |

## Dependent variables

The use of urgent and emergency services was assessed at baseline using the following question: “In the last 12 months, how many times have you sought urgent and emergency services, such as an emergency room?”. This was followed by the characterization of the service used, city of service, frequency of use, and referral after use. One year after the study baseline, we will contact the respondents again to inquire about the use of urgent and emergency care services (number of times and type of service used).
## Independent variables

We assessed multimorbidity as the main exposure using a list of 22 chronic diseases and other conditions (asthma/bronchitis, osteoporosis, arthritis/arthrosis/rheumatism, hypertension, diabetes, cardiac insufficiency, pulmonary emphysema/chronic obstructive pulmonary disease, acute kidney failure, Parkinson’s disease, prostate disease, hypo/hyperthyroidism, glaucoma, cataract, Alzheimer’s disease, urinary/fecal incontinence, angina, stroke, dyslipidemia, epileptic fit/seizures, depression, gastric ulcer, urinary infection, pneumonia, and the flu). The association with urgent and emergency services will be examined using different cutoff points, including the total number of morbidities, ≥2, ≥3, and combinations of morbidities. We will also perform network analyses to assess the pattern of morbidities. Other independent variables were selected from previous studies in the literature 18-21, including demographic and socioeconomic information, behavioral characteristics, health status, and access to, use of, and quality of health services.

## Data analysis

We will test ML algorithms to predict the use of urgent and emergency services after 12 months. The purpose of ML is to predict health outcomes from the basic characteristics of individuals, such as sex, education, and lifestyle. The algorithms will be trained to predict the occurrence of health outcomes, which will contribute to decision-making. With sufficient data and the right algorithms, ML may be able to predict health outcomes with satisfactory performance. ML in healthcare has shown rapid growth in recent years, having been used in significant public health problems such as diagnosing diseases and predicting the risk of adverse health events and deaths 22-24. The use of predictive algorithms aims to improve health care and support decision-making by health professionals and managers.
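As a minimal illustration of this kind of predictive workflow, the sketch below uses synthetic data and scikit-learn with a single candidate model; it is illustrative only, not the study's actual pipeline or variables.

```python
# Sketch of a supervised prediction workflow: 70/30 split, standardization,
# hyperparameter selection by 10-fold cross-validation, evaluation by AUC/F1.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the survey data (binary outcome: used services or not)
X, y = make_classification(n_samples=600, n_features=8, random_state=0)

# 70% for training (parameter/hyperparameter definition), 30% held out for testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),                       # standardize continuous predictors
    ("model", RandomForestClassifier(random_state=0)),
])

# Hyperparameter selection with 10-fold cross-validation, scored by AUC
search = GridSearchCV(pipe, {"model__n_estimators": [50, 100]},
                      cv=10, scoring="roc_auc")
search.fit(X_train, y_train)

proba = search.predict_proba(X_test)[:, 1]
auc = roc_auc_score(y_test, proba)                     # main metric: AUC
f1 = f1_score(y_test, search.predict(X_test))          # secondary metric
```

One-hot encoding of categorical predictors, correlation filtering, and PCA would be added as extra pipeline steps in the same way.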
For the present study, individuals’ baseline characteristics will be used to train popular ML algorithms such as Support Vector Machines (SVM), Artificial Neural Networks (ANNs), Random Forests, Penalized Regressions, Gradient Boosted Trees, and Extreme Gradient Boosting (XGBoost). These models were chosen based on a previous review in which the authors identified the most used models in healthcare studies 25. We will use the Python programming language to perform the analyses. To test the predictive performance of the algorithms on new, unseen data, individuals will be divided into a training set (70% of participants, used to define the parameters and hyperparameters of each algorithm) and a testing set (30%, used to test the predictive ability of the models on new data). We will also perform all the preliminary steps to ensure good performance of the algorithms, especially those related to the pre-processing of predictor variables, such as standardization of continuous variables, one-hot encoding of categorical predictors, exclusion of strongly correlated variables, dimension reduction using principal component analysis, and selection of hyperparameters with 10-fold cross-validation. Different metrics will evaluate the predictive capacity of the models, the main one being the area under the receiver operating characteristic (ROC) curve (AUC). In simplified terms, the AUC varies from 0 to 1, and the closer to 1, the better the model’s predictive capacity 26. The other metrics will be the F1-score, sensitivity, specificity, and accuracy. As measures of model fit, we will tune hyperparameters and class balancing, as well as use k-fold cross-validation.

## COVID-19

The current pandemic, caused by the SARS-CoV-2 virus, has brought uncertainty to the world population.
Although vaccination coverage is already high in large parts of the population, the arrival of new variants and the lack of other essential measures to face the pandemic still create uncertainty about its effects on people. General questions about symptoms, tests, and possible effects of coronavirus infection were included in our baseline survey. We will also use SARS-CoV-2-related questions to evaluate the performance of the ML algorithms. In September 2021, restrictive measures were relaxed due to a decrease in COVID-19 cases in Pelotas, allowing the study to begin. A vaccination passport was required from the interviewers to ensure the safety of both participants and interviewers. In addition, all interviewers received protective equipment against COVID-19, including masks, face shields, and alcohol gel. Finally, the interviewers were instructed to conduct the interviews in an open, ventilated area, ensuring the protection of the participants.

## Quality assurance and control

The data quality and control activities comprised a series of measures aimed at ensuring results without risk of bias. Initially, we developed a research protocol, followed by an instruction manual for each interviewer. Thereafter, interviewers were trained and standardized in all necessary aspects. REDCap was also important to guarantee the control and quality of responses, as the questions were designed with validation checks according to what was expected for each answer. Another measure that ensured the control of interviews was the collection of the latitude and longitude of households, which were plotted weekly on maps by two members of the study coordination to ensure that data collection followed the study sample. With the latitude and longitude data, we also intend to carry out spatial analysis articles using techniques such as scan statistics and kernel density estimation.
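As a toy illustration of the kernel approach just mentioned, the sketch below computes a small Gaussian kernel density estimate over hypothetical household coordinates; the coordinates and bandwidth are made up, and the study's actual spatial analyses may use dedicated tools.

```python
import numpy as np

def gaussian_kde_2d(points, query, bandwidth=0.01):
    """Tiny 2-D Gaussian kernel density estimate: an average of Gaussian bumps
    centred at each observed point, evaluated at the `query` locations."""
    diff = points[:, None, :] - query[None, :, :]        # (n_points, n_query, 2)
    sq = (diff ** 2).sum(axis=2) / (2 * bandwidth ** 2)
    norm = 2 * np.pi * bandwidth ** 2
    return np.exp(-sq).sum(axis=0) / (len(points) * norm)

rng = np.random.default_rng(0)
# Hypothetical household coordinates (lon, lat) scattered around a city centre
points = rng.normal([-52.34, -31.77], 0.02, size=(500, 2))

# Density is high near the cluster centre and near zero far from it
density_centre = gaussian_kde_2d(points, np.array([[-52.34, -31.77]]))
density_far = gaussian_kde_2d(points, np.array([[-52.00, -31.00]]))
```

A hotspot map would evaluate the estimate over a regular grid of query points instead of two single locations.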
The database was checked daily for possible inconsistencies. Finally, two members of the study coordination made random phone calls to 10% of the sample, applying a reduced questionnaire with the objective of comparing the answers with the main questionnaire.

## Ethical principles

We carried out this study using free and informed consent, as determined by the ethical aspects of Resolution No. 466/2012 of the National Council of the Ministry of Health and the Code of Ethics for Nursing Professionals (duties in Chapter IV, Articles 35, 36, and 37, and prohibitions in Chapter V, Articles 53 and 54). After identifying and selecting the study participants, they were informed about the research objectives and signed the Informed Consent Form (ICF). The project was submitted to the Research Ethics Committee via Plataforma Brasil and approved under CAAE 39096720.0.0000.5317.

## Schedule

Initially, we prepared an electronic questionnaire at the beginning of 2021. In February 2021, we initiated data collection after preparing the online questionnaire. The database verification and cleaning steps occurred simultaneously with the collection and continued until March 2022. After this step, data analysis and the writing of scientific articles began.

## First descriptive results and comparison with a population-based study

Of approximately 15,526 households approached, 8,196 were excluded: 4,761 had residents absent at the visit, 1,735 were ineligible, and 1,700 were empty (see Figure 2). We identified 7,330 eligible participants, of whom 1,607 refused to participate in the study, totaling 5,722 residents. Comparing the percentage of females among refusals and completed interviews, we observed a slightly lower prevalence among refusals, 63.2% (95%CI 60.7–65.5), versus 66.8% (95%CI 65.6–68.0) among completed interviews.
The mean age was similar between participants who agreed to participate (50.3; 95%CI 49.9–50.8) and those who refused (50.4; 95%CI 49.0–51.9).

**Figure 2.** *Flowchart describing the sampling process.*

To evaluate the first descriptive results of our sample, we compared our results with the 2019 Brazilian National Health Survey (PNS) database. The PNS 2019 was collected by the IBGE in partnership with the Ministry of Health. The data are in the public domain and are available on the IBGE website (https://www.ibge.gov.br/). To ensure the greatest possible comparability between studies, we used only residents of the urban area of the state of Rio Grande do Sul, analyzed using the svy command in Stata, resulting in 3,002 individuals (residents selected for interview). We developed two models to compare our data with the PNS 2019 survey: a crude model (crude results from the EAI PELOTAS study, without considering survey design estimates) and Model 1 using the survey design: primary sampling units (PSUs) based on census tracts and post-stratification weights based on the Pelotas population projection for 2020 (Table 1). We also evaluated a model using individual sampling weights (i.e., the inverse of the probability of being interviewed in each census tract); its estimates were virtually identical to those above (data not shown). The mean age of our sample was 50.3 years (Table 1) and 46.2 years in Model 1, which was similar to the PNS 2019 (46.7 years). Our weighted estimates presented a similar proportion of females compared to the PNS 2019 sample. The proportions of skin colors were similar in all categories and models. Our crude model presented a higher proportion of participants with incomplete elementary school or less compared to Model 1 and the PNS 2019. Table 2 describes the prevalence of chronic diseases and lifestyle factors in our study and the PNS 2019 sample.
Our prevalence of diabetes was higher in the crude model compared to the weighted estimates and the PNS 2019 sample. In both models, we had a higher proportion of individuals with obesity and hypertension than in the PNS 2019. Asthma and/or bronchitis presented similar proportions in our results compared to the PNS 2019; the same occurred for cancer. Our study presented a higher proportion of smoking participants in both models than the PNS 2019 sample.

**Table 2.**

| Chronic diseases and lifestyle factors | EAI PELOTAS* Crude % (95%CI) | EAI PELOTAS* Survey design % (95%CI) | PNS 2019† % (95%CI) |
| --- | --- | --- | --- |
| Diabetes | 14.2 (13.3–15.1) | 11.5 (10.6–12.4) | 9.0 (8.9–11.1) |
| Obesity | 30.4 (29.2–31.7) | 29.2 (27.7–30.8) | 24.8 (22.6–27.1) |
| Hypertension | 39.0 (37.7–40.3) | 32.4 (31.0–33.9) | 28.1 (25.9–30.5) |
| Asthma or chronic bronchitis | 9.3 (8.6–10.1) | 9.3 (8.4–10.4) | 8.7 (7.3–10.3) |
| Cancer | 4.2 (3.7–4.7) | 3.4 (2.9–4.0) | 3.8 (2.9–4.9) |
| Current smoking | 20.6 (19.6–21.7) | 20.4 (18.9–22.0) | 16.3 (14.6–18.1) |

## DISCUSSION

We described the initial descriptive results, methodology, protocol, and the steps required to perform the ML analysis for predicting the use of urgent and emergency services among the residents of Pelotas, Southern Brazil. We expect to provide subsidies to health professionals and managers for decision-making, helping to identify interventions targeted at patients more likely to use urgent and emergency services, as well as those more likely to develop multimorbidity and to die. We also expect to help health systems optimize their space and resources by directing human and physical capital to those at greater risk of developing multiple chronic diseases and dying. Recent studies in developed countries have found this a feasible challenge with ML 21,27.
If our study presents satisfactory results, we intend to test its practical applicability and acceptance to assist health professionals and managers in decision-making in emergency services among residents of Pelotas. The baseline and the methods used to select households resemble those of the main population-based studies conducted in Brazil, such as the Brazilian Longitudinal Study of Aging (ELSI-Brazil) 28, the EPICOVID 29, and the PNS. The applicability of ML requires suitable predictive variables. Our study included sociodemographic and behavioral variables related to urgent and emergency services, as well as chronic diseases. The EAI PELOTAS study also includes essential topics of particular importance during the COVID-19 pandemic, such as food insecurity, decreased income, physical activity, access to health services, and social support. We also presented one weighting option to obtain sample estimates considering the complex study design. All estimates have their strengths and limitations; each research question answered through this study may consider these possibilities and choose the most suitable one. The estimates without weighting were similar to those considering the primary sampling unit (PSU) and sampling weight. Using the census tract as the PSU is fundamental to account for the sampling design in the estimates of variability (standard error, variance, 95%CI, among others). In addition, due to possible selection bias in the sample, which contains more women and older people than expected, the use of a post-stratification weighting strategy becomes necessary to obtain estimates adjusted for the sex and age distributions of the target population (due to the lack of census data, we used population projections). However, it should be noted that this strategy can produce estimates simulating the expected distribution only by sex and age.
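The post-stratification idea can be illustrated with a toy example in which the sample's sex distribution is reweighted to match a target population projection; all numbers below are made up, not study estimates.

```python
import pandas as pd

# Over-representation of women in the sample (~66.8%), as observed at baseline
sample = pd.DataFrame({"sex": ["F"] * 668 + ["M"] * 332})

# Hypothetical projected population shares for the target population
target_share = {"F": 0.542, "M": 0.458}

# Post-stratification weight = target share / observed sample share, per stratum
sample_share = sample["sex"].value_counts(normalize=True)
sample["post_weight"] = sample["sex"].map(lambda s: target_share[s] / sample_share[s])

# After weighting, the female share matches the target distribution
weighted_f = (sample.loc[sample["sex"] == "F", "post_weight"].sum()
              / sample["post_weight"].sum())
```

In practice the weights would be built jointly over sex-by-age strata and combined with the survey-design (PSU) specification rather than used alone.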
Still, we do not know how much this strategy may distort the estimates, since the demographic adjustment cannot reproduce the adjustment across all sample characteristics, especially for non-measured variables that may have influenced the selection of participants. Thus, we recommend defining the use of each strategy on a case-by-case basis, depending on the objective of the scientific product. Finally, we suggest reporting the different estimates according to the sample design for specific outcomes (e.g., the prevalence of a specific condition) that aim to extrapolate the data to the target population (adults of the city of Pelotas). In conclusion, the present article presented a protocol describing the steps that were and will be taken to produce a model capable of predicting the demand for urgent and emergency services in one year among residents of Pelotas (RS), Southern Brazil.
# Alterations in Fecal Microbiota Linked to Environment and Sex in Red Deer (Cervus elaphus)

## Abstract

### Simple Summary

The gut microbiota forms a complex microecosystem in vertebrates and is affected by various factors. Wild and captive red deer currently live in the same region but have vastly different diets. In this study, 16S rRNA sequencing was performed to evaluate variations in the fecal microbiota of wild and captive individuals of both sexes of red deer. The composition and function of the fecal microbiota in wild and captive environments were significantly different. As a key intrinsic factor, sex has a persistent impact on the formation and development of the gut microbiota. Overall, this study reveals differences in the fecal microbiota of red deer based on environment and sex. These data could guide future applications of population management in red deer conservation.

### Abstract

Gut microbiota play an important role in the host’s metabolism, immunity, speciation, and many other functions. How sex and environment affect the structure and function of the fecal microbiota in red deer (Cervus elaphus) is still unclear, particularly with regard to the intake of different diets. In this study, non-invasive molecular sexing techniques were used to determine the sex of fecal samples from both wild and captive red deer during the overwintering period. Fecal microbiota composition and diversity analyses were performed using amplicons from the V4–V5 region of the 16S rRNA gene sequenced on the Illumina HiSeq platform. Based on the Picrust2 prediction software, potential function distribution information was evaluated by comparison against the Kyoto Encyclopedia of Genes and Genomes (KEGG).
The results showed that the fecal microbiota of the wild deer (WF, n = 10; WM, n = 12) was significantly enriched in Firmicutes and depleted in Bacteroidetes, while the captive deer (CF, n = 8; CM, n = 3) had a significantly higher abundance of Bacteroidetes. The dominant species of the fecal microbiota in the wild and captive red deer were similar at the genus level. The alpha diversity index showed a significant difference in fecal microbiota diversity between males and females in wild deer (p ≤ 0.05). Beta diversity showed significant inter-group differences between wild and captive deer (p ≤ 0.05) but no significant differences between females and males in either wild or captive deer. Metabolism was the most important pathway at the first level of the KEGG pathway analysis. Within the secondary pathways of metabolism, glycan biosynthesis and metabolism, energy metabolism, and the metabolism of other amino acids were significantly different. In summary, these compositional and functional variations in the fecal microbiota of red deer may be helpful for guiding conservation management and policy decision-making, providing important information for future applications of population management and conservation.

## 1. Introduction

Red deer (Cervus elaphus), which belong to the family Cervidae, order Artiodactyla, are distributed across Asia, Europe, North America, and North Africa [1]. The red deer is a typical forest-inhabiting mammal in northeast China and has an important ecological status in the forest ecosystem [2]. Owing to habitat fragmentation, wild red deer populations are currently in sharp decline [2]. Using captive populations as reintroduction resources is an effective strategy to restore wild red deer populations [3]. The complex gut microbiota systems in the mammalian gut are composed of large fractions of microbes [4]. The gut microbiota are a complex product of the long-term evolution of hosts and microbes [4].
Recent studies have shown that gut microbiota are not only a part of the host but also have a significant impact on host health, promoting immunity, digestion, metabolism, and intestinal endocrine hormones, among others [5,6,7]. At the same time, the complex and flexible gut microbiota can be affected by multiple environmental factors and host genotypes [8]. Many studies have shown that diet is an important factor affecting the structure and function of the fecal microbiota [9,10,11]. For example, changes in diet alter the function and diversity of the fecal microbiota as well as the relative abundance of some microorganisms [12]. Moreover, diet-induced loss of microbial function and diversity will increase the risk of diversity loss and extinction through generational amplification [13]. It is therefore necessary to investigate the gut microbiome by comparing differences between wild and captive red deer. To date, however, there has been a lack of studies comparing the gut microbiota between wild and captive red deer [11]. Because of sex differences in behavior and physiology, sex, as an important intrinsic factor, leads to differences in gut microbiota among individuals within species [14,15,16]. Although the results are inconsistent, studies in animal species with significant sexual dimorphism and in humans have shown sex-related differences in gut microbiota. In mice (Mus musculus), poultry, and forest musk deer (Moschus berezovskii), the composition of the gut or fecal microbiota shows sex differences [17,18,19]. At present, few studies have analyzed the sexual dimorphism of the fecal microbiota in red deer. In order to save endangered populations, artificial breeding of wild populations is carried out. The food types and nutrient intake ratios obtained in captive and wild environments are very different, especially for endangered cervids [20].
Therefore, monitoring the digestive system of captive animals and identifying standardized levels of nutritional requirements and fiber composition are critical for determining whether captive wild animals have acclimated to artificially provided food and new environments, which is one of the main problems in wildlife conservation [21]. Using captive populations as reintroduction resources is an effective strategy to restore wild red deer populations. The composition of the gut microbiota in wild populations can be a good indicator of the breeding direction for the captive population [9]. Therefore, understanding the impact of dietary differences between wild and captive red deer on the fecal microbiota can help to assess and ensure the long-term viability of this species [9]. Research methods for the fecal microbiota have also shifted from traditional culture-based methods to 16S rRNA gene sequencing, and from simple descriptions of microbial composition, community structure, and core microbiota to studies of microbial function, which has become a frontier in ungulate research [22]. The main goal of this study was to characterize the composition of the fecal microbiota of red deer by sex and by rearing environment with its associated diet. We used high-throughput 16S rRNA sequencing technology for a comprehensive analysis. We hypothesized that: (1) fecal microbiota composition and function differ between wild and captive deer; and (2) within the wild or captive environment, microbiota diversity and evenness differ between females and males. ## 2.1. Study Site, Subjects, and Sample Collection This study was conducted at the Gaogestai National Nature Reserve in Chifeng, Inner Mongolia (119°02′30″–119°39′08″ E; 44°41′03″–45°08′44″ N). The total area is 106,284 hm2. It is a typical transition-zone forest-steppe ecosystem in the southern foothills of the Greater Khingan Mountains, including forests, shrubs, grasslands, wetlands, and other diverse ecosystems.
In February 2019, 75 line transects were randomly laid out in the Gaogestai protection area. Forward and reverse footprint-chain tracking was carried out after red deer footprints were found through the line-transect investigation. Disposable PE gloves were worn to collect red deer feces. While tracking a footprint chain, we set a 2 m × 2 m plant quadrat every 200 m to 250 m along the chain and collected, as far as possible, all kinds of plant branches eaten by deer in the quadrat [23]. A total of 162 fecal samples were collected and stored at −20 °C within 2 h. Feces from different areas of the Reserve were treated as potentially coming from different individuals, and 43 fecal samples were individually identified in the laboratory. In February 2019, the HanShan Forest Farm in Chifeng City, Inner Mongolia, China (adjacent to the Gaogestai Nature Reserve) had a total of 11 healthy adult red deer of similar age and size. Ear tags were used to differentiate individual red deer. Through continuous observation, feces were collected immediately after excretion by different red deer individuals and stored at −20 °C. We measured crude protein, energy, neutral detergent fiber (NDF), and total non-structural carbohydrates in the red deer diets. ## 2.2. Individual Recognition and Sex Identification We used a QIAamp DNA Fecal Mini Kit (QIAGEN, Hilden, Germany) to extract host deoxyribonucleic acid (DNA) from the fecal samples of red deer as previously described [24]. Microsatellite PCR was performed with nine pairs of microsatellite primers (BM848, BMC1009, BM757, T108, T507, T530, DarAE129, BM1706, and ILST0S058) [25,26] with good polymorphism, selected based on the results of previous studies. These nine primer pairs amplify fecal DNA stably and efficiently. A fluorescent marker (TAMRA, HEX, or FAM) was added to the 5′ end of the upstream primer at each locus (Supplementary Table S1).
Primer information, PCR amplification, and genotype identification procedures are described in the literature [27]. Multi-tube PCR amplification was used for genotyping [28], and 3–4 positive amplifications were performed for each locus to determine the final genotype [29]. The Excel Microsatellite Toolkit [30] was used to search for matching genotypes in the data. Samples were judged to come from the same individual if all loci had the same genotype or if only one allele differed at a single locus. The microsatellite data were analyzed with Cervus 3.0 software to complete the genotyping [31]. After individual identification, males and females were distinguished by detecting the presence of the *Sry* gene. *Sry* primers (F: 5′-TGAACGCTTTCATTGTGTGGTC-3′; R: 5′-GCCAGTAGTCTCTGTGCCTCCT-3′) were designed, and the amplification system was established. To minimize false positives and false negatives, *Sry* amplification was repeated three times for each sample, and samples in which the target band appeared in the second and third amplifications were determined to be male [32]. ## 2.3. Fecal Microbiota DNA Extraction, Amplification, and Sequencing Total microbial DNA was extracted from fecal samples using an E.Z.N.A® Soil DNA Kit (Omega Bio-Tek, Norcross, GA, USA). The integrity of the extracted DNA was checked by 1% agarose gel electrophoresis. A ~420 bp fragment encompassing the V4–V5 region of the bacterial 16S ribosomal RNA gene was amplified by PCR using primers 515F (5′-GTG CCA GCM GCC GCG GTA A-3′) and 907R (5′-CCG TCA ATT CMT TTR AGT TT-3′). NEB Q5 high-fidelity DNA polymerase (NEB, Ipswich, MA, USA) was used for the PCR amplifications (Supplementary Table S1). A 1:1 mixture of 1X TAE buffer and PCR product was loaded on a 2% agarose gel for electrophoretic detection. PCR products were mixed in equidensity ratios.
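The individual-matching rule used here (same genotype at all loci, or a single differing allele at one locus) can be sketched as follows. The locus names echo the primer panel described above, but the allele sizes and the helper functions are hypothetical illustrations, not part of the study's software:

```python
# Hypothetical sketch of the individual-matching rule: two fecal samples are
# assigned to the same deer if their multilocus genotypes are identical, or
# differ by at most one allele at a single locus. Allele sizes are invented.

def allele_mismatches(g1, g2):
    """Count allele differences between two genotypes over shared loci.

    Genotypes are dicts mapping locus name -> (allele_a, allele_b).
    """
    diffs = 0
    for locus in g1:
        a, b = sorted(g1[locus]), sorted(g2[locus])
        # count alleles that differ at this locus (0, 1, or 2)
        diffs += sum(x != y for x, y in zip(a, b))
    return diffs

def same_individual(g1, g2):
    # matching rule: identical genotypes, or at most one differing allele
    return allele_mismatches(g1, g2) <= 1

sample_a = {"BM848": (142, 150), "T108": (98, 102)}
sample_b = {"BM848": (142, 150), "T108": (98, 104)}  # one allele differs
print(same_individual(sample_a, sample_b))  # True
```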
Then, the pooled PCR products were purified using the Quant-iT PicoGreen dsDNA Assay Kit (Invitrogen, Carlsbad, CA, USA). Sequencing libraries were generated using the TruSeq Nano DNA LT Library Prep Kit (Illumina, San Diego, CA, USA) following the manufacturer's recommendations, and index codes were added. Library quality was assessed on an Agilent 5400 (Agilent Technologies Co. Ltd., Santa Clara, CA, USA). Finally, the library was sequenced on an Illumina NovaSeq 6000 platform, generating 250 bp paired-end reads. Microbiome bioinformatics was performed with QIIME2 2019.4 [33], with slight modifications, according to the official tutorials (https://docs.qiime2.org/2019.4/tutorials/ (accessed on 30 September 2022)). Briefly, raw FASTQ files were imported into a format usable by the QIIME2 system with the qiime tools import program. DADA2 [34] obtains amplicon sequence variants through denoising and de-duplication; no similarity-based clustering is carried out. Demultiplexed sequences from each sample were quality filtered and trimmed, de-noised, and merged, and then chimeric sequences were identified and removed using the QIIME2 DADA2 plugin to obtain the feature table of amplicon sequence variants (ASVs) [34]. The QIIME2 feature-classifier plugin was then used to align ASV sequences to a pre-trained GREENGENES 13_8 99% database (trimmed to the ~420 bp V4–V5 region bound by the 515F/907R primer pair) to generate the taxonomy table [35]. To unify sequencing effort, samples were rarefied to a depth of 25,318 sequences per sample before alpha and beta diversity analysis. Rarefaction randomly selects the same number of sequences from each sample to reach a unified depth. ## 2.4. Bioinformatics and Statistical Analyses Sequence data analyses were mainly performed using QIIME2 and R software (v3.2.0).
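The rarefaction step described in the methods (random subsampling of each sample to a common depth; 25,318 reads in this study) can be sketched as follows, using toy counts and an illustrative depth rather than real data:

```python
# Minimal sketch of rarefaction: subsampling a per-ASV count vector to a
# fixed depth without replacement. Counts and depth here are toy values.
import random

def rarefy(counts, depth, seed=0):
    """Subsample a vector of ASV counts to `depth` reads without replacement."""
    total = sum(counts)
    if total < depth:
        raise ValueError("sample has fewer reads than the rarefaction depth")
    # expand counts into a pool with one label per read
    pool = [asv for asv, n in enumerate(counts) for _ in range(n)]
    random.Random(seed).shuffle(pool)
    kept = pool[:depth]
    rarefied = [0] * len(counts)
    for asv in kept:
        rarefied[asv] += 1
    return rarefied

sample = [5000, 3000, 2000, 500]   # reads per ASV (toy numbers)
sub = rarefy(sample, depth=4000)
print(sum(sub))  # 4000
```

QIIME2 performs this internally when a sampling depth is supplied to its diversity pipeline; the sketch only shows the underlying idea.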
ASV-level alpha diversity indices, such as the Chao1 richness estimator and Pielou's evenness, were calculated from the ASV table in QIIME2 [36,37] and visualized as box plots (R software, package "ggplot2"). Beta diversity analysis was performed to investigate the structural variation of microbial communities across samples using weighted and unweighted UniFrac distance metrics [38,39], visualized via principal coordinate analysis (PCoA) (R software, package "ape"). The significance of differences in microbiota structure among groups was assessed by PERMANOVA (permutational multivariate analysis of variance) [40]. Random forest analysis (R software, package "randomForest") was applied to rank the importance of taxa with differential abundance between groups and to screen the phyla and genera most responsible for structural differences between groups, using QIIME2 with default settings [41,42]. Phylogenetic Investigation of Communities by Reconstruction of Unobserved States (PICRUSt2) [43] is software that predicts functional abundance from marker-gene (typically 16S rRNA) sequencing data. The ASV abundance table is normalized, and each ASV is mapped against the Kyoto Encyclopedia of Genes and Genomes (KEGG) database to obtain functional information and a functional abundance profile. ## 3.1. Identification of Individuals and Sex A total of 22 wild red deer individuals were identified from the 43 fecal samples, including 12 males and 10 females (Supplementary Table S2). The female captive deer were CF1–CF8, and the male captive deer were CM1–CM3. We divided all the red deer (22 wild and 11 captive) into four groups: wild females (WF) (n = 10), wild males (WM) (n = 12), captive females (CF) (n = 8), and captive males (CM) (n = 3).
The information about identification, location, sex, and diet is summarized in Supplementary Table S2. ## 3.2. Diet Composition and Nutritional Composition of Wild and Captive Red Deer Winter Diets The wild red deer fed on 16 species of plants in the winter, belonging to 16 genera in 9 families. Since the frequency of occurrence of some edible plants, such as Mongolian oak (Quercus mongolica) and Chinese maple (Acer sinensis), was less than 7%, the nutrient content of these plants was not measured; in addition, we assumed that they had little influence on the nutritional strategy of red deer. Therefore, the primary nutrient contents of 14 types of edible plants were determined. The food and nutritional composition of wild red deer are shown in Supplementary Table S3. When the captive red deer were fed, each type of food was offered separately at different times. The nutritional content of the primary winter food of captive red deer from the farm (adjacent to the Gaogestai Nature Reserve) is shown in Supplementary Table S4. Only one kind of feed was provided to the captive deer at each feeding time, with all captive deer feeding together. Captive red deer fed on leaves and a high-protein artificial diet. Compared with captive red deer, wild deer have a wider feeding range and no dietary limitations; substantial differences exist between these two feeding regimes. ## 3.3. Sequencing Analysis and Clustering A total of 1,561,654 high-quality sequences were obtained from the fresh winter feces of 22 wild deer and 11 captive deer. Rarefaction curves based on the Chao1 diversity index reached asymptotes at about 22,500 sequences: as sequencing depth increased, the curves flattened and no longer changed, indicating that the sequencing depth in this study essentially captured the diversity of the red deer fecal microbiota (Supplementary Figure S1).
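As a rough illustration of the two alpha-diversity indices used in this study, the following computes Chao1 richness and Pielou's evenness from a single ASV count vector. The counts are invented, and QIIME2's implementations may differ in detail (e.g., the exact Chao1 bias correction):

```python
# Illustrative computation of Chao1 richness and Pielou's evenness from one
# ASV count vector. Toy counts, not data from the study.
import math

def chao1(counts):
    """Chao1 richness: observed ASVs plus a singleton/doubleton correction."""
    s_obs = sum(1 for n in counts if n > 0)
    f1 = sum(1 for n in counts if n == 1)   # singletons
    f2 = sum(1 for n in counts if n == 2)   # doubletons
    # bias-corrected form, defined even when there are no doubletons
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

def pielou_evenness(counts):
    """Pielou's J: Shannon entropy normalized by its maximum, ln(S)."""
    present = [n for n in counts if n > 0]
    total = sum(present)
    shannon = -sum((n / total) * math.log(n / total) for n in present)
    s = len(present)
    return shannon / math.log(s) if s > 1 else 0.0

counts = [120, 80, 40, 2, 1, 1, 0]
print(round(chao1(counts), 2))       # 6.5
print(round(pielou_evenness(counts), 3))
```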
A total of 15,228 ASVs were obtained using a 100% similarity (ASV) approach. The WF, WM, CF, and CM groups included 3056, 3924, 6661, and 1587 ASVs, respectively. ## 3.4. Microbial Composition and Diversity by Environment and Sex We found significant differences in fecal microbial composition between wild and captive red deer. The fecal microbial communities of the four groups (WF, WM, CF, and CM) were dominated by the phyla Firmicutes and Bacteroidetes (Figure 1A). The phylum Firmicutes was most abundant in WF (81.12 ± 2.87%), followed by WM (79.03 ± 2.19%), CF (58.24 ± 3.17%), and CM (59.66 ± 0.47%). Bacteroidetes was the second most abundant phylum in WF (15.19 ± 2.09%), WM (16.89 ± 2.08%), CF (33.02 ± 5.48%), and CM (31.55 ± 1.61%). At the genus level, the genera from the four groups with abundance > 1% were Oscillospira, a candidate genus 5-7N15 from the family Bacteroidaceae, Ruminococcus, Roseburia, Clostridium, and Prevotella (Figure 1B and Table 1). The Chao1 diversity indices demonstrated a significant difference between the WF and WM groups (p < 0.01); there was no statistically significant difference between the CF and CM groups (p > 0.05). Pielou's evenness index showed no significant differences between the WF and WM groups (p > 0.05) or the CF and CM groups (p > 0.05) (Figure 2). Wild and captive red deer also differed in beta diversity. PCoA plots based on the unweighted and weighted UniFrac distance matrices revealed a clear separation of the fecal microbiota between wild and captive red deer (Figure 3A). The PCoA analysis showed that the fecal microbial structures of the CF and CM groups were more similar to each other than those of the WF and WM communities (weighted: F = 13.82, p = 0.001; unweighted: F = 5.98, p = 0.001; Figure 3A; Supplementary Table S5).
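The PERMANOVA pseudo-F statistics reported here come from partitioning squared distances into between- and within-group components and permuting group labels. A minimal, self-contained sketch on a made-up distance matrix follows; the actual analysis used UniFrac distances in QIIME2/R, so this is only the underlying idea:

```python
# Compact, illustrative PERMANOVA on a toy distance matrix. The matrix and
# labels are invented; real analyses used UniFrac distances.
import itertools, random

def pseudo_f(dist, labels):
    """Pseudo-F statistic from a square distance matrix and group labels."""
    n = len(labels)
    groups = set(labels)
    ss_total = sum(dist[i][j] ** 2
                   for i, j in itertools.combinations(range(n), 2)) / n
    ss_within = 0.0
    for g in groups:
        idx = [i for i, l in enumerate(labels) if l == g]
        ss_g = sum(dist[i][j] ** 2 for i, j in itertools.combinations(idx, 2))
        ss_within += ss_g / len(idx)
    ss_between = ss_total - ss_within
    a = len(groups)
    return (ss_between / (a - 1)) / (ss_within / (n - a))

def permanova_p(dist, labels, n_perm=999, seed=0):
    """Permutation p-value: share of shuffled labelings with F >= observed."""
    rng = random.Random(seed)
    observed = pseudo_f(dist, labels)
    hits = sum(
        pseudo_f(dist, rng.sample(labels, len(labels))) >= observed
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)

# toy 4x4 distance matrix: samples 0,1 ("wild") close; 2,3 ("captive") close
d = [[0.0, 0.1, 0.9, 0.8],
     [0.1, 0.0, 0.85, 0.9],
     [0.9, 0.85, 0.0, 0.15],
     [0.8, 0.9, 0.15, 0.0]]
labels = ["wild", "wild", "captive", "captive"]
print(pseudo_f(d, labels) > 1)  # True: strong separation gives a large F
```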
A random forest analysis indicated that Firmicutes and Bacteroidetes were the phyla whose abundances differed most between the wild and captive populations (importance > 0.1) and that were primarily responsible for the differences in microbial communities between the four groups (Figure 3C). Ruminococcus, Treponema, Akkermansia, a candidate genus 5-7N15 of the family Bacteroidaceae, and a candidate genus rc4-4 of the family Peptococcaceae were the main genera driving differences in microbial communities between sexes and environments (importance > 0.04; Figure 3D). ## 3.5. Functional Modules of Fecal Microbial Communities In the function prediction performed on the fecal microbial communities, metabolism was the most common first-level pathway (76.67%). The second-level metabolism pathways included amino acid metabolism (17.26%), carbohydrate metabolism (17.85%), metabolism of cofactors and vitamins (16.57%), and metabolism of terpenoids and polyketides (12.66%) (Figure 4A). A PCoA analysis showed that the WF and WM groups had more similar microbial function clusters (Figure 4B). There were significant differences in three metabolic pathways: glycan biosynthesis and metabolism (GBM), energy metabolism (EM), and metabolism of other amino acids (MAA) (p < 0.05) (Figure 5). ## 4. Discussion This is the first study to apply high-throughput sequencing to describe the fecal bacterial microbiota of wild and captive red deer by sex. Analysis of the differences in fecal microbiota is a key step in releasing captive red deer to help expand the wild population.
In general, the fecal bacterial microbiota of red deer was similar to that of other Cervidae, such as elk (Cervus canadensis), white-tailed deer (Odocoileus virginianus) [38], and white-lipped deer (Cervus albirostris) [39], at least at the bacterial phylum level, with high proportions of the phyla Firmicutes and Bacteroidetes. In the digestive tract of herbivores, the role of Firmicutes is mainly to decompose cellulose and convert it into volatile fatty acids, thereby promoting food digestion and host growth and development. The enrichment of Firmicutes plays an important role in enabling red deer to obtain abundant nutrients from food and, at the same time, affects the metabolic function of the fecal microbiota. Bacteroidetes can improve host metabolism, promote the development of the gastrointestinal immune system, participate in bile acid, protein, and fat metabolism, and regulate carbohydrate metabolism. It can also produce special glycans and polysaccharides, which strongly inhibit inflammation [43]. Differences in microbiota may be explained by changes in diet. Previous studies have shown that diet is the main factor affecting the gut microbiota in mammals [40]. Wild deer likely have a more varied diet than captive deer. These phyla, Firmicutes and Bacteroidetes, are involved in important processes such as food digestion, nutrient regulation and absorption, energy metabolism, and host intestinal defense against foreign pathogens [40,41,42]. Alpha diversity alterations may be attributed to differences in diet or to hormonal influences on the gut microbiota. Fecal microbiota richness in wild populations is higher than that in captive animals such as the Tibetan wild ass (Equus kiang), bharal (Pseudois nayaur), Tibetan sheep (Ovis aries), and yak (Bos mutus) [44,45,46,47,48].
Nevertheless, other studies have found that captivity might increase the alpha diversity of the fecal microbiota in most Cervidae, for example, sika deer (genus Cervus), Père David's deer (Elaphurus davidianus), and white-tailed deer (Odocoileus virginianus) [49,50]. Environmental stresses in the wild or the special structure of the stomach and intestines in these deer may lead to decreased alpha diversity of the fecal microbiota in wild deer [50]; this phenomenon needs further research to determine its cause. Our results showed that the richness of the fecal microbial community in wild red deer differed by sex (Figure 2): in wild deer, microbiota diversity was higher in females than in males. Microbial community alterations by sex could be attributed to hormonal differences [51]. The sampling time fell during the gestation period of red deer, and levels of female growth hormone during pregnancy may affect the fecal microbiota. Reproductive hormones have also been associated with sex-related gut microbial changes in wild animals [17,52,53]. Increasing evidence indicates that sex steroid hormone levels are associated with the human gut microbiota [54,55]. Further, Edwards et al. reported that estrogen and progesterone have an impact on gut function [56]. The captive deer also had small sample sizes (n = 3 males and n = 8 females), which limited our ability to detect such differences. In this study, the functional pathway composition of wild red deer was more similar (Figure 5B), which is the opposite of the pattern for microbial structure (Figure 3A). A change in microbial structure does not necessarily lead to a change in function, which may be because different microbial communities can perform the same functions [57]. In recent years, studies have shown that gut microbiota are involved in various metabolic processes involving amino acids, carbohydrates, and energy, confirming their primary role in assisting host digestion and absorption [58].
The gut microbiota has also been found to be involved in environmental information processing, suggesting that it plays an important role in facilitating acclimation to changing environments [59]. The metabolism of the gut microbiota is closely related to the feeding habits of the host; over long-term evolution, the gut microbiota responds to changes in diet type or to specific diets by adjusting the content of certain digestive enzymes [4,60]. Studies have shown that a decrease in fecal microbial diversity can reduce the functional repertoire of the microbiota, its efficiency, and its resistance to pathogen invasion [61]. The decrease in fecal microbial diversity in captive populations accordingly resulted in a decrease in functional microbiota [61]. Ruminococcaceae and Lachnospiraceae are two of the most common bacterial families within the phylum Firmicutes [62]. It has been hypothesized that they play an important role as active plant degraders [63,64]. According to our results, the level of Ruminococcaceae in the captive groups was significantly lower than that in the wild groups. This suggests that the fiber-reduced diet in captivity may be modifying the ability of the fecal microbiota to degrade recalcitrant substrates, such as cellulose, hemicellulose, and lignocellulose, that are common in the main resources of the wild red deer diet. The consequent reduction of diet resources in captive deer might trigger the decline of important metabolic pathways associated with nutrient use [64]. 16S rRNA analysis constitutes a valuable and cost-efficient approach for the surveillance and monitoring of wild populations as well as captive individuals. PICRUSt2 prediction accuracy depends on the availability of closely related annotated bacterial genomes in the database and on the phylogenetic distance from the reference genome.
However, the prediction results remain uncertain: the predicted genes do not correlate 100% with the real metagenome of the microbiota [65]. At present, owing to the difficulty of cultivation, the mechanisms by which some functional bacteria exert their effects remain unclear. Therefore, in follow-up work, it will be necessary to refine culture conditions for intestinal anaerobic bacteria, the most extensive of which are Firmicutes and some Bacteroidetes. The microbiota can be cultured in vitro by simulating the gut environment, and its functions can be inferred and further verified in combination with multi-omics studies (metagenomics, metatranscriptomics, proteomics, etc.). At the same time, unknown functional microbiota and their genome sequence information can be explored and studied. This work will help clarify the metabolic activities of the complex microbiota and further explore the host physiological processes in which the gut microbiota is involved. ## 5. Conclusions In conclusion, our study provided information on the structure and function of the fecal microbiome of red deer through 16S rRNA gene sequencing of fecal samples. Comparative analyses identified significant variations in fecal microbiota composition and function between captive and wild populations and indicated that environment and sex have a great influence on these variations. These findings are of great significance for the reintroduction of captive red deer, given that differences in fecal microbiota composition and function between captive and wild red deer would greatly affect the ability of captive red deer to adapt to the wild environment. For further study, incorporating additional methods (e.g., transcriptomics) to annotate gene content and characterize the functional traits of the host will be essential for better understanding the physiology and immunology of red deer.
# Rumen-Protected Lysine and Methionine Supplementation Reduced Protein Requirement of Holstein Bulls by Altering Nitrogen Metabolism in Liver ## Abstract ### Simple Summary Excessive protein intake causes dietary nitrogen to be excreted as urinary and fecal nitrogen, reducing nitrogen use efficiency. The main way to reduce dietary nitrogen loss is to lower dietary protein content while still meeting the nutritional needs of ruminants. Reducing crude protein while adding rumen-protected amino acids can therefore achieve a reduction in nitrogen emissions. The results showed that adding RPLys (55 g/d) and RPMet (9 g/d) to a low-protein (11%) diet for bulls could improve growth performance, increase nitrogen metabolism, and enhance the expression of genes related to nitrogen metabolism. ### Abstract The aim of this study was to investigate the effect of low-protein diets supplemented with rumen-protected lysine (RPLys) and methionine (RPMet) on growth performance, rumen fermentation, blood biochemical parameters, nitrogen metabolism, and gene expression related to N metabolism in the liver of Holstein bulls. Thirty-six healthy, disease-free Holstein bulls with similar body weights (BW) (424 ± 15 kg, 13 months old) were selected. According to their BW, they were randomly divided into three groups of 12 bulls each in a completely randomized design. The control group (D1) was fed a high-protein basal diet (13% CP), while bulls in the two low-protein groups received a diet with 11% crude protein plus either RPLys 34 g/d·head + RPMet 2 g/d·head (low protein with low RPAA, T2) or RPLys 55 g/d·head + RPMet 9 g/d·head (low protein with high RPAA, T3). At the end of the experiment, feces and urine were collected from the bulls for three consecutive days. Blood and rumen fluid were collected before morning feeding, and liver samples were collected after slaughter.
The results showed that the average daily gain (ADG) of bulls in the T3 group was higher than that of bulls in D1 (p < 0.05). Compared with D1, a significantly higher nitrogen utilization rate (p < 0.05) and serum IGF-1 content (p < 0.05) were observed in both the T2 and T3 groups, whereas blood urea nitrogen (BUN) content was significantly lower in the T2 and T3 groups (p < 0.05). The acetic acid content in the rumen of the T3 group was significantly higher than that of the D1 group. No significant differences in alpha diversity were observed among the groups (p > 0.05). Compared with D1, the relative abundance of Christensenellaceae_R-7_group in T3 was higher (p < 0.05), while those of Prevotellaceae_YAB2003_group and Succinivibrio were lower (p < 0.05). Compared with the D1 and T2 groups, the messenger ribonucleic acid (mRNA) expression of urea cycle-related genes (CPS-1, ASS1, OTC, and ARG) and of N-AGS, S6K1, eIF4B, and mTORC1 in the liver was significantly enhanced in the T3 group (p < 0.05). Overall, our results indicated that a low dietary protein level (11%) supplemented with RPAA (RPLys 55 g/d + RPMet 9 g/d) can benefit the growth performance of Holstein bulls by reducing nitrogen excretion and enhancing nitrogen efficiency in the liver. ## 1. Introduction Protein, typically the most expensive macronutrient in diets, plays critical roles in the health, growth, production, and reproduction of animals. However, protein ingredient shortages and nitrogen pollution challenge livestock farming worldwide, and these problems have been aggravated in recent decades by the increased demand for animal-source food from a fast-growing population with rising incomes [1,2]. Therefore, enhancing the utilization efficiency of dietary protein and reducing excretory losses would be alternative strategies to solve these problems [3].
Low-protein diets have been proven to enhance nitrogen utilization [4,5]. However, restricting N intake also sacrifices the growth performance and productivity of animals [6,7], which has been attributed to limiting amino acid deficiencies in low-protein diets [8]. Lysine (Lys) and methionine (Met) are the top two limiting amino acids (LAA) for ruminants [9,10]. Adding rumen-protected Lys and Met to low-protein diets is considered an efficient way to meet animals' amino acid requirements, as they can escape rumen degradation and increase the supply of amino acids to the intestines, thus improving N utilization [11]. Incorporating rumen-protected Lys and/or Met into low-protein diets was reported to increase dry matter intake in transition cows [12,13]. Previous studies also suggested that rumen-protected Lys and/or Met in low-protein diets promoted milk protein yield in high-producing dairy cows [14,15] and maintained milk production and milk protein yield while reducing urinary N losses in dairy cows [16]. How to reduce the nitrogen emissions of ruminants without affecting their production performance has long been a research focus, and work in this area has mostly concentrated on dairy cows; few studies have been conducted on Holstein bulls. Nitrogen recycling contributes to effective N utilization in ruminants [17], and the ruminal microbiota and the liver play important roles in this nitrogen metabolism [4]. Therefore, the aim of this study was to investigate the effect of low-protein diets supplemented with rumen-protected lysine (RPLys) and methionine (RPMet) on growth performance, rumen fermentation, blood biochemical parameters, nitrogen metabolism, and gene expression related to N metabolism in the livers of Holstein bulls. ## 2. Materials and Methods This study was conducted between March 2016 and June 2016 at a Hongda animal husbandry farm in Baoding, P. R. China.
The experimental protocol (YXG 1711) was approved by the Institutional Animal Care and Use Committee of Hebei Agricultural University. ## 2.1. Animals, Experimental Design, and Diets Thirty-six healthy, disease-free Holstein bulls with similar body weights (BW; 424 ± 15 kg, 14 months old) were selected. According to their BW, they were randomly divided into 3 groups of 12 bulls each in a completely randomized design. The control group (D1) was fed a high-protein basal diet (13% CP), while bulls in the two low-protein groups received a diet with 11% crude protein plus either RPLys 34 g/d·head + RPMet 2 g/d·head (low protein with low RPAA, T2) or RPLys 55 g/d·head + RPMet 9 g/d·head (low protein with high RPAA, T3). Basal diets were prepared according to the Japanese feeding standard (2008) for beef cattle [18] (Table 1). The RPAA (Hangzhou Kangdequan Feed Limited Company, Hangzhou, Zhejiang, China) had a rumen protection rate of 60.0% and was premixed with 100 g of ground corn, which served as a carrier for the supplement; the same amount of ground corn was supplied to bulls in the D1 group. All animals were fed the basal diets ad libitum with free access to clean water. All experimental animals were housed in tie stalls according to their groups and were fed twice daily at 06:00 and 18:00 h, with feed refusals removed before morning feeding. The experiment consisted of 3 periods: a 14-day adaptation period, a 2-month feeding period, and a 7-day sample collection period. Holstein bulls were weighed before morning feeding at the beginning and end of every feeding period. ## 2.2. Sample Collection The diet offered to and refused by each bull was weighed every day throughout the trial to calculate average daily dry matter intake (ADMI). Samples of individual feed ingredients, orts, and diets were collected weekly during the experimental period and stored at −20 °C [19].
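ADMI feeds into the trial's growth-performance metrics, average daily gain (ADG) and the feed-to-gain ratio (F/G). A toy sketch of that arithmetic follows; the weights, days, and intake are illustrative stand-ins, not measured trial data:

```python
# Toy sketch of the growth-performance arithmetic: average daily gain (ADG)
# and feed-to-gain ratio (F/G). All numbers are illustrative, not trial data.

def average_daily_gain(initial_kg, final_kg, days):
    """ADG = (final weight - initial weight) / number of test days."""
    return (final_kg - initial_kg) / days

def feed_to_gain(admi_kg, adg_kg):
    """F/G = average daily dry matter intake / average daily gain."""
    return admi_kg / adg_kg

adg = average_daily_gain(initial_kg=424.0, final_kg=508.0, days=60)
print(round(adg, 2))                     # 1.4 (kg/day)
print(round(feed_to_gain(9.8, adg), 2))  # 7.0
```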
At the beginning of the experiment, all Holstein bulls were weighed before morning feeding to obtain their initial weight. Similarly, at the end of the trial, all Holstein bulls were weighed before morning feeding to obtain their final weight, and the average daily gain (ADG) was calculated as (final weight − initial weight)/number of test days. Based on the ADMI and ADG, the feed-to-gain ratio (F/G) was calculated. At the end of the feeding period, four Holstein bulls in each group were randomly selected, and a 10-mL blood sample was collected via jugular venipuncture from each bull before morning feeding. The samples were immediately centrifuged at 3000 rpm for 15 min, and the serum was collected and stored at −20 °C for further analysis. Two hours after morning feeding at the end of the feeding period, ruminal fluid samples from four bulls were collected via an oral stomach tube equipped with a vacuum pump. The first 100 to 200 mL of fluid collected was discarded to reduce the chance that the stomach-tube rumen samples were contaminated with saliva. Approximately 200 mL of rumen fluid was then collected, of which about 20 mL was filtered through four layers of sterile cheesecloth, transferred to 2-mL sterile tubes, and stored in liquid nitrogen for further analysis. Three bulls in each group were randomly selected and euthanized at the end of the feeding experiment, 2 h after morning feeding. The middle part of the liver was collected immediately after sacrifice and cut into 5-mm fragments; the tissue samples were then placed into sterile tubes and stored in liquid nitrogen for further analysis. Another three bulls in each group were randomly selected after the feeding period and transferred to metabolic cages. After a 5-day adaptation period, feces and urine were collected over the next 3 days. Total feces and urine were collected daily before morning feeding.
The feces of each bull were weighed, mixed, subsampled (100 g/kg), and stored at −20 °C. Each fecal sample was evenly divided into two parts, one treated with 10% sulfuric acid solution (10:1) and the other left untreated, before being dried, crushed, sifted, and stored at room temperature for the determination of nutrient content. The urine of each bull was collected in a plastic container with 10 mL of 10% sulfuric acid to prevent the loss of ammonia; after the volume was measured, the urine was filtered through four layers of gauze, and subsamples (100 mL/individual) were stored at −20 °C for urine nitrogen measurement. ## 2.3. Laboratory Analysis Offered and refused feed and feces were dried at 55 °C for 48 h, ground to pass through a 1-mm screen (Wiley mill, Arthur H. Thomas, Philadelphia, PA, USA), and stored at 4 °C for analysis of chemical composition. The dry matter (DM, method 934.01), ash (method 938.08), crude protein (CP, method 954.01), ether extract (EE, method 920.39), Ca (method 927.02), and P (method 965.17) contents of the samples were determined according to the procedures of the AOAC [20], and NDF (amylase) and ADF contents were analyzed using the methods of Van Soest et al. [21]. Lysine and methionine contents in the feed were analyzed using an automatic AA analyzer (Hitachi 835, Tokyo, Japan). Serum alanine aminotransferase (ALT), aspartate aminotransferase (AST), albumin (ALB), total protein (TP), glucose (GLU), and blood urea nitrogen (BUN) were analyzed using an automatic biochemical analyzer (Hitachi 7020, Tokyo, Japan). Serum growth hormone (GH) and insulin-like growth factor-1 (IGF-1) contents were measured with enzyme-linked immunosorbent assay (ELISA) kits according to the manufacturer's specifications (HZ Bio. Co., Shanghai, China).
The pH value of the rumen fluid was measured immediately using a digital pH analyzer (PHS-3C, Shanghai, China), and ammonia nitrogen (NH3-N) and microbial protein (MCP) were determined following recommendations provided in previous studies [22]. Volatile fatty acid (VFA) concentrations in rumen fluid were analyzed using gas chromatography (TP-2060F, Tianpu Co., Ltd., Beijing, China). DNA in rumen fluid was extracted by the CTAB method using a commercial kit (Omega Bio-Tek, Norcross, GA, USA), and, after DNA purity was checked by 1% agarose gel electrophoresis, the library was constructed using a TruSeq® DNA PCR-Free Sample Preparation Kit (Illumina, Inc., San Diego, CA, USA). The constructed library was then sequenced on a HiSeq2500 PE250 (Illumina, Inc., San Diego, CA, USA). Sequence data were analyzed using the QIIME2 pipeline according to a previous study [23] and submitted to NCBI under project ID P2016030502-S2-3-1. Primers for the target genes (Table 2) were designed according to the bovine gene sequences reported in NCBI and synthesized by the Shanghai Biotechnology Corporation Limited Company. Total ribonucleic acid (RNA) was extracted from the liver tissue of Holstein bulls with a miRNeasy kit (Qiagen, Hilden, Germany); RNA quality was then determined using a NanoDrop 2000 (NanoDrop Technologies, Rockland, DE, USA), with OD260/OD280 ranging between 1.9 and 2.1. Real-time polymerase chain reaction (PCR) was performed to quantify the expression of the target genes, using an SYBR Green PCR Master Mix (Takara Bio Co., Shiga, Japan) and following the manufacturer's protocols. Gene expression in liver tissue was calculated using the 2^−ΔΔCt method, with ACTB as the reference gene and the D1 group as the calibrator. ## 2.4. Statistical Analysis Data management was performed in Excel, and statistical analysis was carried out using R software (version 3.6.3, R Foundation for Statistical Computing, Vienna, Austria)
with a one-way analysis of variance (ANOVA) model: Y = α + Xi + ei, where Y is the observed parameter, α is the overall mean, Xi is the ith treatment effect, and ei is the residual error. All data are shown as least squares means; significant differences among treatments were declared at p ≤ 0.05 and a tendency at 0.05 < p ≤ 0.10. ## 3.1. Growth Performance There was no significant difference (p > 0.05) in ADG, ADMI, or F/G among the groups; however, the F/G in the T2 and T3 groups decreased by 8.45% and 6.67%, respectively, compared with D1 (Table 3). ## 3.2. Nitrogen Metabolism Compared with the D1 group, nitrogen intake and the amounts of nitrogen excreted in feces and urine were significantly lower in the T2 and T3 groups (p < 0.05). The ratio of fecal nitrogen excretion to nitrogen intake (FN/IN) was lower in T3 than in the D1 and T2 groups, while the ratio of urinary nitrogen excretion to nitrogen intake (UN/IN) was lower in the T2 and T3 groups than in the D1 group. Thus, a significantly higher nitrogen utilization rate was observed in both the T2 and T3 groups compared with the D1 group (p < 0.05; Table 4). ## 3.3. Serum Biochemical Index The low-protein diet with RPAA supplementation had no effect on serum concentrations of ALT, AST, ALB, TP, GLU, and GH (p > 0.05). The concentration of serum BUN significantly decreased, whereas that of serum IGF-1 significantly increased, in the T3 group compared with the D1 group (p < 0.05; Table 5). ## 3.4. Rumen Fermentation No significant difference was detected in rumen pH, the concentrations of NH3-N, MCP, propionate, and butyrate, or the acetate/propionate ratio among the groups (p > 0.05). The concentration of acetate in the T3 group was significantly higher than that in D1 and T2 (p < 0.05; Table 6). ## 3.5. Rumen Microbiota No significant difference was observed in alpha diversity among the groups (p > 0.05; Table 7). The relative abundances of the 16 most abundant bacteria at the genus level were compared among the groups. The relative abundance of Ruminococcaceae_NK4A214 in the T3 group was lower than that in the D1 group (p < 0.05), and the abundance of Christensenellaceae_R-7_group in the T3 group was lower than that in both the D1 and T2 groups (p < 0.05). Meanwhile, the relative abundance of Prevotellaceae_YAB2003_group in T3 was higher than that in the D1 group (p < 0.05), and the relative abundance of Succinivibrio in T3 was higher than that in both the D1 and T2 groups (p < 0.05; Table 8). ## 3.6. Gene Expression in Liver Tissue The expression of the CPS-1, ASS, ARG, OTC, and N-AGS genes, which relate to nitrogen or urea metabolism in liver tissue, is shown in Figure 1. The expression of CPS-1, ARG, and N-AGS was significantly upregulated in the T3 group (p < 0.05), although no significant difference was observed between the T2 and D1 groups (p > 0.05). The expression of CPS-1, ARG, and N-AGS increased by 25%, 18%, and 13%, respectively, in the T2 group compared with D1. The expression of ASS and OTC was upregulated in both the T2 and T3 groups compared with D1 (p < 0.05). The expression of the SLC3A2, IRS1, PDK, PI3K, TSC1, TSC2, mTORC1, eIF4EBP1, S6K1, and eIF4B genes, which are related to nitrogen metabolism in liver tissue, is shown in Figure 2. The low-protein diet with RPAA supplementation did not affect the expression of SLC3A2, PI3K, TSC2, and eIF4EBP1 (p > 0.05); however, the expression of the IRS1, PDK, S6K1, and eIF4B genes in liver tissue increased significantly (p < 0.05), the expression of the mTORC1 gene tended to increase (p = 0.09), and the expression of the TSC1 gene decreased significantly (p < 0.05). ## 4. Discussion Protein is a major factor affecting the health, growth, and production of ruminants. Although high-protein diets are often formulated to achieve better production in ruminants, the global protein shortage is increasing [1], and high-protein diets burden the environment by increasing nitrogen (N) excretion through urine and feces [3], which is harmful to the sustainability of the livestock industry. By providing bulls with a low-protein diet (11% CP) supplemented with rumen-protected lysine and methionine, our findings indicate that, compared with the high-protein diet (13% CP) group that followed the recommended Japanese feeding standard for beef cattle [18], the low-protein diet supplemented with RPAA maintained ADG, increased N utilization, and decreased N excretion through urine and feces. These findings are comparable with previous studies in which feeding rumen-protected Lys and/or Met to castrated cattle increased daily gain [24] and reduced urinary nitrogen and urea nitrogen in urine [25]. Previous work has likewise shown that the addition of RPAA to a low-protein diet increases N utilization, reduces N emission and environmental pollution, and promotes the growth performance of dairy cows [12,14]. Blood biochemical parameters are sensitive indicators of animal health and nutritional condition [26,27]. The serum contents of ALT, AST, ALB, TP, GLU, BUN, GH, and IGF-1 were used to assess the nutritional condition of bulls in the different treatment groups. We observed that BUN content decreased, and IGF-1 content increased, in bulls provided with a low-protein diet supplemented with RPAA, while the other indexes were not affected. Serum BUN content reflects the nitrogen balance of ruminants and is negatively correlated with N utilization [17]. When ruminants were provided with low dietary protein and achieved a higher N utilization, serum BUN decreased [4,28].
The main function of IGF-1 is to inhibit protein degradation and promote protein synthesis, maintaining nitrogen balance and improving the growth performance of animals [29,30]. These observations further explain the improvement in N utilization and growth performance of bulls on a low-protein diet supplemented with RPAA. When cattle are fed low-protein diets, urea N recycling can be considered a high-priority metabolic function, because a continuous N supply for microbial growth in the rumen is a strategy for animal survival [31]. The abundance of the microflora reflects its ability to adapt to a particular environment and compete for available nutrients; moreover, it indicates its importance to the overall function of the microbiome as a whole [32]. The ACE (reflecting the richness of bacteria in the sample), Shannon, and PD-whole-tree (reflecting microbial diversity) indexes were used to assess the alpha diversity of the rumen microbiota. Previous studies have demonstrated that rumen fermentation and microbiota are sensitive to dietary protein levels [33,34] or feed ingredients [35] in ruminants, and are also sensitive biomarkers of N utilization [36]. By monitoring rumen fermentation and microbiota, we observed an increase in the acetate content of the rumen; however, other parameters, including NH3-N and MCP content, were not significantly affected, which is similar to the results of a study by Martin et al. [37]. The addition of the methionine analogue 2-hydroxy-4-methylthiobutyric acid (HMB) and esterified 2-hydroxy-4-methylthiobutyric acid (HMBi) to the diet of dairy cows significantly increased the content of rumen total volatile fatty acids (TVFAs) [37]. Some studies have shown that methionine hydroxy analogue (MHA) can increase the ratio of acetic acid to butyric acid in rumen contents [38].
Research has shown that 0.52% methionine could increase the content of butyric acid in the rumen, while 0.26% methionine did not affect VFA content [39]. These results show that the effect of methionine on rumen VFA content is unpredictable. The alpha diversity of the rumen microbiota was not affected by treatment, and only a small portion of bacteria at the genus level (~5% in abundance) differed significantly between groups, with a decreased relative abundance of Ruminococcaceae_NK4A214_group and Christensenellaceae_R-7_group and an increased abundance of Prevotellaceae_YAB2003_group and Succinivibrio in bulls on a low-protein diet supplemented with RPAA. These findings suggest that bulls on a low-protein diet supplemented with RPAA maintained rumen fermentation and ruminal microbiota homeostasis comparable to those of D1. The liver plays an important role in the utilization efficiency of recycled N. Excess nitrogen in the rumen is absorbed into the animal's blood in the form of ammonia, which is then metabolized by the liver to synthesize urea. Of the urea synthesized by the liver, part is secreted via saliva into the rumen and intestines, where it is reused by bacteria, protozoa, and other microorganisms; the other part is filtered by the kidneys and excreted with the urine [28]. The urea cycle plays a key role in maintaining a positive nitrogen balance in animals, especially at low dietary nitrogen levels. S6K1 and eIF4EBP1 are genes that regulate protein translation downstream of mTORC1. The S6K1 gene can promote protein translation by stimulating the phosphorylation of downstream eIF-4B, RPS6, eIF-2, and PABP [40], and the SLC3A2, IRS1, PDK, PI3K, TSC1, TSC2, mTORC1, eIF4EBP1, S6K1, and eIF4B genes are related to nitrogen metabolism in the liver; moreover, these genes become overexpressed when blood ammonia increases, in order to increase urea synthesis and rebalance blood ammonia [41].
However, unexpected results were observed in the current experiment: when feeding bulls a low-protein diet supplemented with RPAA, we observed that serum BUN decreased while the expression of genes associated with urea synthesis in the liver increased. This finding can explain why the low-protein diet supplemented with RPAA induced an increase in N efficiency; however, the mechanism behind these upregulated genes in the liver remains unclear. Previous studies have demonstrated that AAs in diets not only provide nutrition but also act as functional regulators, with the ability to alter gene expression in multiple tissues such as mammary tissue [42], polymorphonuclear cells [43], and adipose tissue [44], as well as liver tissue [45,46]. The influence of RPLys and RPMet on liver gene expression requires further study. As the number of samples in this study was limited, the current findings need to be further tested in future research. ## 5. Conclusions In summary, providing bulls with low dietary protein (11%) plus RPLys (55 g/d) and RPMet (9 g/d) could increase their nitrogen utilization rate, serum IGF-1 content, ruminal acetate content, and the expression of genes associated with urea metabolism and nitrogen metabolism in the liver compared with high dietary protein (13%). Our findings indicate that a low-protein diet supplemented with RPAA could benefit bulls mainly by increasing liver nitrogen metabolism and utilization; however, whether RPAA affects liver gene expression at the nutritional level or as a signaling molecule still requires further study.
# Epithelial-to-Mesenchymal Transition and Phenotypic Marker Evaluation in Human, Canine, and Feline Mammary Gland Tumors ## Abstract ### Simple Summary In this study, we analyzed human breast cancer and canine and feline mammary tumors with regard to the expression, at either the gene or the protein level, of molecules related to the capacity of an epithelial cell to become mesenchymal (epithelial-to-mesenchymal transition), thereby acquiring a higher ability to metastasize. In our samples, some typical markers of this transition were not higher at the mRNA level in tumors than in healthy tissues, indicating that other markers should be investigated. At the protein level, however, molecules such as vimentin and E-cadherin were indeed associated with higher aggressiveness, making them potentially useful markers. As already described in the literature, we also demonstrated that feline mammary tumors are close to an aggressive subtype of human breast cancer called triple negative, whereas canine mammary tumors are more similar to the less aggressive subtype of human breast cancer that expresses hormonal receptors. ### Abstract Epithelial-to-mesenchymal transition (EMT) is a process by which epithelial cells acquire mesenchymal properties. EMT has been closely associated with cancer cell aggressiveness. The aim of this study was to evaluate the mRNA and protein expression of EMT-associated markers in mammary tumors of humans (HBC), dogs (CMT), and cats (FMT). Real-time qPCR for SNAIL, TWIST, and ZEB, and immunohistochemistry for E-cadherin, vimentin, CD44, estrogen receptor (ER), progesterone receptor (PR), ERBB2, Ki-67, cytokeratin (CK) 8/18, CK5/6, and CK14 were performed. Overall, SNAIL, TWIST, and ZEB mRNA was lower in tumors than in healthy tissues. Vimentin was higher in triple-negative HBC (TNBC) and FMTs than in ER+ HBC and CMTs (p < 0.001).
Membranous E-cadherin was higher in ER+ than in TNBCs (p < 0.001), whereas cytoplasmic E-cadherin was higher in TNBCs than in ER+ HBC (p < 0.001). A negative correlation between membranous and cytoplasmic E-cadherin was found in all three species. Ki-67 was higher in FMTs than in CMTs (p < 0.001), whereas CD44 was higher in CMTs than in FMTs (p < 0.001). These results confirm a potential role of some markers as indicators of EMT and suggest similarities between ER+ HBC and CMTs, and between TNBC and FMTs. ## 1. Introduction Mammary gland cancer is the most common tumor in women [1] and in female dogs [2], and the third most common neoplasia in cats [3]. Human breast cancer (HBC) is classified into four main subtypes according to the expression of estrogen receptor (ER), progesterone receptor (PR), and epidermal growth factor receptor ERBB2, as follows: (i) Luminal A tumors (ER+ and/or PR+, ERBB2−); (ii) Luminal B tumors (ER+ and/or PR+, ERBB2+); (iii) ERBB2-overexpressing tumors (ER−, PR−, ERBB2+); and (iv) triple-negative (ER−, PR−, ERBB2−) breast cancer (TNBC) [4]. TNBCs are typically high-grade carcinomas characterized by aggressive behavior and a poor prognosis, with a high risk of distant metastasis and death [5]. Canine mammary tumors (CMTs) are classified based on morphologic features [6]. Fifty percent of CMTs are malignant, with a 20% risk of metastasis [7]. The majority (80–90%) of feline mammary tumors (FMTs) are characterized by highly aggressive behavior that leads to rapid progression and distant metastasis [8,9]. Typically, FMTs lack the expression of ER, PR, and ERBB2, and have been considered a remarkable spontaneous model for TNBC [10,11,12,13,14,15,16]. In all three species, mammary tumors exhibit both inter- and intra-tumor heterogeneity as a consequence of genetic and non-genetic aberrations [17].
Over the past 20 years, the investigation of cell differentiation/phenotypic markers has been used in both human and veterinary medicine, primarily to improve our knowledge of the histogenesis of mammary tumors [18]. In the normal human, canine, and feline mammary gland, two cell subpopulations are present: luminal epithelial cells, positive for cytokeratin (CK) 7, CK8, CK18, and CK19; and basal/myoepithelial cells, variably positive for CK5, CK6, CK14, CK17, SMA, calponin, vimentin, and p63 [19]. In HBC, the evaluation of cell differentiation proteins is frequently performed in association with routine diagnostic markers (ER, PR, ERBB2, and Ki-67) to better classify this tumor. The identification of HBC subtypes has diagnostic, prognostic, and therapeutic value, and is associated with the cell differentiation and epithelial-to-mesenchymal transition (EMT) status of the neoplastic population according to a hierarchical model [20]. EMT is a key event by which neoplastic epithelial cells acquire a mesenchymal phenotype [21]. As a result, tumor cells obtain the ability to detach from the primary tumor mass, invade the surrounding tissue, migrate throughout the body, and eventually give rise to metastases in distant organs [22]. Classical EMT is characterized by a decreased expression of epithelial markers and a complementary upregulation of mesenchymal markers. Classical EMT transcription factors, namely snail family transcriptional repressor 1/2 (SNAIL), TWIST, and zinc-finger-enhancer binding protein 1/2 (ZEB), are known to orchestrate EMT by regulating cell adhesion, migration, and invasion, also interacting with different signaling pathways and microRNAs [22,23]. Although this is a well-described process that promotes metastasis formation, accumulating evidence suggests the existence of an intermediate state called partial EMT or hybrid E/M, in which both epithelial and mesenchymal markers are co-expressed in cancer cells [23,24,25].
The aim of this study was to investigate the mRNA expression of the classical EMT-related transcription factors SNAIL, TWIST, and ZEB in human, canine, and feline mammary tumors. Additionally, we studied the expression of key proteins involved in the EMT process, including E-cadherin and vimentin, and of proteins related to the tumor phenotype, such as ER, PR, ERBB2, Ki-67, cytokeratin (CK) 8/18, CK5/6, CK14, and CD44. ## 2.1. Tissue Collection Human samples were collected from the Istituto Oncologico Veneto (IOV, Padua, Italy), whereas canine and feline samples were collected from local veterinary clinics. The human sample collection was approved by the IOV Ethics Committee. All patients or patients’ owners provided informed, written consent to use their samples for this study. Specifically, samples from 5 healthy human mammary gland tissues (MGTs), 5 ER+ HBCs, 5 TNBCs, 4 healthy canine MGTs, 10 canine mammary tumors (CMTs) (5 grade I and 5 grade II), 6 healthy feline MGTs, and 6 grade III FMTs were collected. In this study, to avoid contamination with other tumor cell subpopulations, we selected only simple tubular carcinomas (STC), which are composed of only one tumor cell subpopulation (luminal epithelial cells) [6]. Healthy MGTs were collected from tumor-bearing patients during the therapeutic/diagnostic surgical procedures, with no additional sampling performed solely for the study. Sampling was performed by surgeons. At the time of sampling, most of the tissue was fixed in 4% formaldehyde for histopathology and immunohistochemistry, whereas a small peripheral portion of tumor and normal tissue (approx. 0.5 cm2 each) was collected and preserved in RNALater (Ambion, Austin, TX, USA), according to the manufacturer’s instructions. In the lab, before RNA extraction, a small portion of each RNALater-preserved sample was fixed in 4% formaldehyde and embedded in paraffin to check the content of the samples.
Four-μm tissue sections were stained with hematoxylin and eosin, and slides were examined under the microscope to further confirm the presence of healthy tissue in the samples labelled as “healthy” and of tumor tissue in the samples labelled as “tumor”. ## 2.2. RNA Extraction and Real-Time Polymerase Chain Reaction For gene expression analysis, a small portion of each tissue sample preserved in RNALater was used for RNA extraction using Trizol Reagent (Invitrogen, Carlsbad, CA, USA), following the manufacturer’s protocol. The extracted RNA was treated with RNAse-free DNAse I (New England Biolabs, Ipswich, MA, USA). Five hundred ng of total RNA from each sample was reverse transcribed using the RevertAid First Strand cDNA Synthesis Kit (Invitrogen). The cDNA was then used as a template for quantitative real-time PCR on the ABI 7500 Real-Time PCR System (Applied Biosystems) to evaluate the mRNA expression of the following EMT-related genes: SNAIL1, SNAIL2, TWIST1, TWIST2, ZEB1, and ZEB2. All samples were tested in triplicate. ACTB was used as the housekeeping gene. The primer sequences are reported in Table 1. The primers were designed using NCBI Primer-BLAST. To examine primer specificity, the dissociation curves of the qPCR products were assessed to confirm a single amplification peak. The qPCR reactions were then purified using the ExoSAP-IT PCR product cleanup (Applied Biosystems) and sequenced at BMR Genomics (Padua, Italy). The sequences were then verified against the NCBI BLAST database. For data analysis, the ΔΔCt value was calculated for each sample and expressed as a relative fold change (2^−ΔΔCt), as described in [16]. Real-time PCR efficiency was calculated by performing a dilution series experiment and applying the following formula to the standard curve: efficiency = 10^(−1/slope) − 1 [26,27]. Real-time PCR efficiency was between 90 and 100% for all samples. ## 2.3. Immunohistochemistry Immunohistochemistry (IHC) was performed on the above-mentioned samples as well as on additional human breast tissue samples from the Division of Anatomic Pathology archive of the University of Padua Hospital, and on additional canine and feline mammary tissue samples from the anatomic pathology archive of the Department of Comparative Biomedicine and Food Science of the University of Padua. Specifically, IHC was performed on the following tissue samples: 10 ER+ HBCs, 11 TNBCs, 11 grade I CMTs, 11 grade II CMTs, and 12 grade III FMTs. Sections (4 μm) were processed with an automatic immunostainer (BenchMark XT, Ventana Medical Systems), as previously described [11]. Briefly, the automated protocol included the following steps: high-temperature antigen unmasking (CC1 reagent, 60 min), primary antibody incubation (1 h at RT, see below for dilutions), an ultrablock (antibody diluent, 4 min), hematoxylin counterstain (8 min), dehydration, and mounting. Negative controls omitted the primary antibody, whereas adnexa, epidermis, and non-tumor mammary gland, when present, were used as positive controls for CK8/18, CK5/6, CK14, E-cadherin, vimentin, and Ki-67. For ERBB2, an additional external technical positive control was used (ERBB2 3+ HBC), whereas the species-specific cross-reactivity had previously been tested in dogs and cats [10,28]. For ER and PR, feline and canine uterus as well as ovary were also stained as positive controls. For CD44, lymph node was used as a positive control. Positive control tissues, typically collected from necropsies, were derived from the same archive as the canine and feline mammary tumor samples.
The following antibodies were tested: anti-ER alpha (anti-ERα) (NCL-ER-6F11, 1:40, Novocastra, in the human and feline species; NCL-ER-LH2, 1:25, Novocastra, in the canine species); anti-PR (NCL-PGR-312, 1:80, Novocastra, in the human and feline species); anti-ERBB2 (A0485, 1:250, Dako, in the canine and feline species); anti-CK8/18 (NCL-L-5D3, 1:30, Novocastra); anti-CK5/6 (D5/16 B4, 1:50, Dako); anti-CK14 (NCL-LL 002, 1:20, Novocastra); anti-E-cadherin (610182, 1:120, BD Biosciences); anti-CD44 (550538, 1:100, BD Biosciences); anti-vimentin (M0725, 1:150, Dako); and anti-Ki-67 (M7240, 1:50, Dako). In the human species, ERBB2 immunolabeling was performed with the Bond Oracle HER2 IHC System for BOND-MAX (Leica Biosystems), containing the anti-ERBB2 antibody (clone CB11, ready-to-use). IHC positivity was semi-quantitatively and separately evaluated by an ECVP-boarded (V.Z.) and an experienced (L.C.) pathologist. Specifically, cytoplasmic and nuclear positivity were measured as a percentage of positive cells for all markers (100 cells per field were counted in 10 high-power fields). ERBB2 was scored as 0, 1+, 2+, or 3+ according to the American Society of Clinical Oncology (ASCO) 2018 recommendations [29] (10% cut-off), with 2+ and 3+ cases considered weakly and strongly positive for complete membrane immunolabeling, respectively. The protein expression of the studied markers was evaluated in the epithelial/luminal component. Additionally, immunolabeling was observed in healthy/hyperplastic adjacent mammary tissue, in which case normal basal/myoepithelial cells were also evaluated. ## 2.4. Statistical Analysis Statistical analyses were performed using Prism version 9.3.1 (GraphPad Software, San Diego, CA, USA). To verify mean differences among groups, either Student’s t-test or one-way ANOVA with Tukey’s multiple comparison test was used when values were normally distributed.
A Mann–Whitney test or Kruskal–Wallis test was used when values were not normally distributed. Normality was tested using the Shapiro–Wilk test. Spearman’s rank correlation analysis was used to analyze associations between variables. The level of significance was set at p ≤ 0.05. ## 3.1. Gene Expression We sought to investigate the mRNA expression of the EMT transcription factors SNAIL, TWIST, and ZEB in mammary tumors compared with healthy tissue. In HBC (Figure 1), SNAIL1 showed a higher mRNA expression in TNBCs than in ER+ tumors (p < 0.05). Conversely, the mRNA expression of TWIST1, TWIST2, and ZEB1 in ER+ tumors and TNBCs was significantly lower than in healthy MGTs (p < 0.05). Additionally, TNBCs had a significantly lower mRNA expression of SNAIL2 and ZEB2 than healthy MGTs (p < 0.05). In CMTs (Figure 2), SNAIL1 showed a higher mRNA expression in STC II than in healthy MGTs (p < 0.01) and STC I (p < 0.001). The mRNA expression of SNAIL2, ZEB1, and ZEB2 was lower in tumors than in healthy MGTs, although not statistically significantly. In FMTs (Figure 3), tumors showed a lower mRNA expression of SNAIL1, SNAIL2, TWIST1, TWIST2, ZEB1, and ZEB2 than healthy MGTs, which was significant only for ZEB1 (p < 0.05). ## 3.2. Immunohistochemistry Next, we aimed to study the expression of key proteins involved in the EMT process. The expression of the studied markers was evaluated in the tumor epithelial luminal cell population. CD44 and ERBB2 staining was membranous, whereas CK8/18, CK5/6, CK14, and vimentin staining was cytoplasmic. E-cadherin staining was present in the membrane, the cytoplasm, or both, and each localization was evaluated separately. Ki-67, ER, and PR staining was nuclear.
As expected, epithelial luminal cells of healthy MGTs in all three species were diffusely positive for CK8/18, membranous E-cadherin, ER, and PR, and occasionally positive for CK5/6, CK14, and CD44. The basal/myoepithelial cells of healthy MGTs in all three species were diffusely positive for CK5/6, CK14, CD44, and vimentin, and occasionally also positive for ER and PR. Results for the human, canine, and feline mammary tumors are summarized in Table 2 and Table S1, and are graphically represented in Figure 4. In HBC (Figure 4A), ER+ tumors had a high protein expression (roughly 100%) of CK8/18, whereas they were negative for the basal cytokeratins CK5/6 and CK14. In TNBCs, the protein expression of CK8/18, although fairly heterogeneous, was lower than in ER+ tumors (p < 0.001), and the protein expression of CK5/6 was higher than in ER+ tumors (p < 0.05). In ER+ tumors, the protein expression of E-cadherin was predominantly membranous (Figure 5A), whereas in TNBCs E-cadherin protein expression was often lost from the membrane and predominantly cytoplasmic (Figure 5B). Membranous E-cadherin protein expression was higher in ER+ tumors than in TNBCs (p < 0.001), whereas cytoplasmic E-cadherin protein expression was higher in TNBCs than in ER+ tumors (p < 0.001) (Figure 4A). Overall, the expression of this protein was quite heterogeneous across the samples. Interestingly, a strong negative correlation between membranous and cytoplasmic E-cadherin protein expression was found in ER+ tumors (r = −1, p < 0.001) (Figure 4B) and in TNBCs (r = −0.9, p < 0.001) (Figure 4C). CD44 protein expression was lower in ER+ tumors (Figure 5C) than in TNBCs (Figure 5D), although not statistically significantly. Notably, in TNBCs, a strong positive correlation between CK5/6 and CK14 expression (r = 0.8, p < 0.01), and a moderate positive correlation between CD44 and vimentin (r = 0.6, p < 0.05), were found.
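Rank correlations like those reported above (e.g., membranous versus cytoplasmic E-cadherin) come from Spearman's method, i.e., the Pearson correlation of the ranks. A minimal self-contained sketch follows; the per-tumor percentages are made-up illustrative values, not the study's data.

```python
def rank(values):
    """Assign 1-based average ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for a tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical percentages of cells positive for membranous vs.
# cytoplasmic E-cadherin in six tumors (illustrative values only)
membranous = [95, 80, 70, 50, 30, 10]
cytoplasmic = [5, 15, 25, 45, 60, 85]
print(spearman_rho(membranous, cytoplasmic))  # perfectly inverse ranking → -1.0
```

A perfectly inverse ranking yields rho = −1, matching the strongest correlations reported above; in practice a library routine would also supply the p-value.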
All CMTs (Figure 4D) were positive (>$1\%$) for ER and were therefore classified as ER+. ER protein expression was lower in STC II than in STC I ($p \leq 0.01$). The protein expression of E-cadherin was quite heterogeneous across the samples. As in HBC, a strong negative correlation between membranous and cytoplasmic E-cadherin protein expression was found in the CMTs (r = −0.974, $p \leq 0.001$) (Figure 4E). In addition, in STC II, a strong positive correlation between CK8/18 and membranous E-cadherin ($r = 0.8$, $p \leq 0.01$) and a strong negative correlation between CK8/18 and cytoplasmic E-cadherin (r = −0.8, $p \leq 0.01$) were found. Interestingly, in STC II, Ki-67 expression was positively correlated with CK8/18 ($r = 0.7$, $p \leq 0.05$) and membranous E-cadherin ($r = 0.8$, $p \leq 0.01$) expression, and negatively correlated with cytoplasmic E-cadherin expression (r = −0.7, $p \leq 0.05$). All FMTs (Figure 4D) were negative for ER (<$1\%$), PR (<$1\%$), and ERBB2 (either 0 or 1+), and were therefore classified as triple negative. E-cadherin protein expression was quite heterogeneous. As in the HBCs and CMTs, a strong negative correlation between membranous and cytoplasmic E-cadherin protein expression was found (r = −0.984, $p \leq 0.001$) (Figure 4F). In addition, a strong negative correlation between CK5/6 and vimentin expression was found (r = −0.8, $p \leq 0.01$). CD44 protein expression was higher in the CMTs (Figure 5E) than in the FMTs ($p \leq 0.001$) (Figure 5F). Vimentin and Ki-67 protein expression was lower in the CMTs than in the FMTs ($p \leq 0.001$) (Figure 6). The expression of the studied markers was not associated with other histopathological features, such as vascular invasion or regional lymph node metastases (data not shown). Moreover, no significant correlations were found between the gene and protein expression of the analyzed markers.

## 4. Discussion

In this study, we investigated the expression of genes and proteins involved in one of the processes thought to play a major role in cancer progression: epithelial-to-mesenchymal transition (EMT) [22]. EMT is an evolutionarily conserved morphogenetic program during which epithelial cells undergo a series of changes allowing them to acquire a mesenchymal phenotype [21]. During classical EMT, epithelial cells lose the expression of junctional molecules such as membranous E-cadherin and acquire mesenchymal properties such as migration, invasiveness, and elevated resistance to apoptosis. Transcription factors such as SNAIL, TWIST, and ZEB regulate this process and are activated by a variety of signaling pathways, including TGF-β, Notch, and Wnt/β-catenin [30,31,32,33]. SNAIL is a classical regulator of EMT that represses E-cadherin transcription in both mouse and human cell lines [34]. In HBC, it has been associated with tumor recurrence and metastasis [35] and with poor patient prognosis [36]. In contrast to the findings of other authors [37], we found that the mRNA expression of SNAIL2 was significantly lower in TNBCs than in healthy MGTs. In CMTs, SNAIL1 expression was higher in STC II than in healthy MGTs and STC I, indicating a possible association of EMT with the higher aggressiveness of these tumors. SNAIL2 expression in CMTs did not differ between healthy MGT and tumor tissue, confirming what other authors have also found [38,39,40]. Conversely, in FMTs, there was a trend for STC III to have lower mRNA expression of SNAIL1 and SNAIL2 than healthy MGTs. To the best of our knowledge, SNAIL has never been investigated in feline tumors. It is believed that TWIST plays an essential role in cancer metastasis [33].
In HBCs and FMTs, the mRNA expression of TWIST1 and TWIST2 was lower in tumors than in healthy MGTs, which differs from what some authors have found in HBC [41], but is similar to what other authors have found in HBC [42] and in FMTs [43]. ZEB1 has been implicated in carcinogenesis in breast tissue [44] because it enhances tumor cell migration and invasion [45]. In our samples, ZEB1 mRNA expression was lower in tumors than in healthy MGTs, as previously reported by other authors in HBC [42]. Although one study examined the expression of ZEB1 and ZEB2 in five canine mammary carcinoma cell lines [46], to the best of our knowledge, ZEB mRNA expression has never been studied in CMT and FMT tissues. Overall, our data suggest that these transcription factors are often downregulated in tumors compared with healthy MGTs, except for SNAIL1 in TNBCs and in STC II CMTs. The RNA isolated from healthy tissues came from the whole mammary gland, which is composed of different cell populations, namely epithelial cells, connective tissue, and fat. Although these transcription factors are barely detectable in normal mesenchymal cells of adult tissues [47], adipose tissue expresses these genes variably [48]. As a result, the mRNA levels of these genes in healthy samples can be dramatically influenced by the presence of non-mammary gland tissues, such as fat. Moreover, it is possible that the number of cells undergoing classical EMT is low when compared with the tumor bulk, which is known to be characterized by remarkable intra-tumor heterogeneity [22]. Furthermore, some authors believe that these genes are regulated post-transcriptionally [35,49,50,51]. Finally, accumulating evidence suggests the existence of cell populations with a hybrid E/M state, characterized by the co-expression of epithelial and mesenchymal markers, which exhibit increased plasticity and metastatic potential [23,24,25,52].
However, the expression of some of these markers may be associated with a complete EMT status, whereas others may be associated with a partial EMT status. For example, it is believed that SNAIL1 is a stronger inducer of complete EMT than SNAIL2, which is rather associated with a hybrid E/M state [53,54]. This suggests that the choice of the markers to be analyzed is fundamental and may help in identifying intermediate EMT states more precisely. In addition, in order to study the EMT process, it would be interesting in the future to investigate the expression of these markers at the single-cell level, using approaches such as laser capture microdissection or single-cell RNA sequencing. In the present study, we also assessed the protein expression of several phenotypic as well as EMT-related markers, such as ER, PR, ERBB2, CK8/18, CK5/6, CK14, E-cadherin, CD44, vimentin, and Ki-67, in a subset of HBCs, CMTs, and FMTs. The HBC ER+ samples showed a high expression of luminal CK8/18 and a negative expression of basal CK5/6 and CK14, confirming the strong association between ER+ tumors and highly differentiated glandular cells (CK8/18+), as well as null expression of basal CKs (CK5/6, CK14). In the TNBCs, the protein expression of CK8/18 was highly heterogeneous, whereas the expression of CK5/6 and CK14 was low in most of the samples. This result, in concordance with another study [55], supports the idea that the terms “basal-like cancer” and “triple-negative breast cancer” are not interchangeable. Indeed, only a small percentage of TNBCs are basal-like [56]. The CMTs were positive for ER, whereas the FMTs were negative for ER, PR, and ERBB2. Despite only a few samples being analyzed, these data suggest, as already proposed by other authors [11,57], a similarity between CMTs and ER+ HBC and between FMTs and TNBCs.
In CMTs and FMTs, the protein expression of CK8/18, CK5/6, and CK14 was highly heterogeneous, confirming the high inter- and intra-tumor heterogeneity [16,57]. Basal CK14 protein expression was higher in FMTs than in CMTs, confirming that FMTs are more “basal-like” than CMTs [11,12]. E-cadherin is a cellular adhesion molecule, and its disruption may contribute to the enhanced migration and proliferation of tumor cells, leading to invasion and metastasis [58,59,60,61,62]. In our samples, E-cadherin protein expression was evaluated separately in the membrane and in the cytoplasm of tumor cells. Overall, the expression of E-cadherin was highly heterogeneous across the samples of the three species, confirming once more the high inter-tumor heterogeneity of mammary cancer in the three species. In human ER+ tumors, E-cadherin protein expression was predominantly membranous, whereas in TNBCs it was predominantly cytoplasmic, confirming that the delocalization of the protein is associated with increased tumor aggressiveness [56,63]. These results confirm that it is not only the loss of E-cadherin that correlates with increased tumor aggressiveness, but also the translocation of the protein from the membrane to the cytoplasm, as already described [64,65,66,67]. Together with E-cadherin, CD44 has been extensively studied in tumor cell differentiation, invasion, and metastasis, and is thought to be involved in the EMT process in HBC [68,69]. Although a few studies on HBC have shown that protein overexpression of CD44 is associated with poor prognosis and metastasis [70], others have shown that downregulation of its expression is correlated with an adverse outcome [68,71]. For this reason, the role of CD44 in the behavior and prognosis of HBC is controversial [71,72]. In our study, CD44 expression was heterogeneous and lower overall in ER+ tumors compared with TNBCs.
This trend agrees with study findings by Klingbeil and collaborators, who found high levels of CD44 expression in tumors with a basal-like or triple-negative phenotype, suggesting an association of this protein with an aggressive phenotype in HBC [73]. CD44 was highly expressed (roughly $85\%$) in our CMT samples, regardless of the tumor grading, as well as in the healthy mammary gland tissues. Moreover, other authors found no differences between benign CMTs, malignant CMTs, and normal mammary gland tissues, suggesting that CD44 is not associated with aggressiveness in canine mammary tumors [74,75,76,77,78]. In FMTs, the expression of CD44 was low overall (approximately $5\%$). Sarli and collaborators evaluated the intramammary/intratumoral and extramammary/extratumoral expression of CD44 in feline normal mammary tissues, benign tumors, and malignant tumors in relationship to lymphangiogenesis [79]. They found that CD44 had a significantly higher expression in intramammary/intratumor areas compared with extramammary/extratumor areas in both benign and malignant tumors. Additionally, no statistically significant differences in CD44 expression between normal mammary gland, benign tumors, and malignant tumors were found. To the best of our knowledge, no other studies on CD44 expression in FMT tissues are present within the literature. These data, together with our findings, suggest that CD44 is not a useful marker of malignancy in cats. Another protein that is well-studied and plays a central role in the EMT process, and therefore in tumor invasion and metastasis, is vimentin [51]. Vimentin is one of the major intermediate filament proteins and is ubiquitously expressed in normal mesenchymal cells [80]. Recent studies have reported that vimentin knockdown causes a decrease in genes linked to HBC metastasis, such as the receptor tyrosine kinase Axl [81]. In our study, we also evaluated the expression of vimentin in HBCs, CMTs, and FMTs. 
We found a higher expression of vimentin in TNBCs compared with ER+ tumors, although the difference was not statistically significant. This result suggests that vimentin expression is associated with the triple-negative subtype, aggressive behavior, and a poor prognosis in HBC, as previously reported by many authors [82,83,84,85]. In CMTs, vimentin expression was low (approximately $15\%$), confirming the low aggressiveness of mammary tumors in dogs, which is in concordance with the findings of other authors [86]. Conversely, in FMTs, the expression of vimentin, although heterogeneous, was quite high (approximately $70\%$), suggesting the high aggressiveness of mammary tumors in this species [9], as well as their similarities with TNBCs [11]. Unfortunately, as a limitation of this study, only grade I and II CMTs were included. No RNAlater-sampled canine tumors were diagnosed as grade III. For possible IHC analyses in our archive of paraffin-embedded tissues, only a very limited number of grade III simple CMTs were found (14 cases over five years), which were often already vascular/lymph node invasive (10/14). This study would not have benefited much from adding only IHC analyses of grade III CMTs that had already invaded the vascular system or metastasized. We still believe that the study allowed the collection of some new data on the most frequent FMTs and CMTs in comparison with HBC samples, assessing both gene and protein expression.

## 5. Conclusions

In summary, this study showed that most of the classical EMT-related transcription factors SNAIL, TWIST, and ZEB are downregulated in tumor tissues compared with healthy tissues, although additional analyses should be performed to better investigate them in neoplastic clones and in a larger set of samples. IHC analyses indicated a potential role of some markers, namely vimentin and E-cadherin, but not of others (i.e., CD44), as indicators of EMT (including loss of cell differentiation and increased malignancy).
Moreover, all the IHC data seem to support the already proposed similarities between FMTs (grade III) and TNBCs, as well as between CMTs (grade I and II) and ER+ HBCs. The two species are widely discussed as potential spontaneous models of specific HBC subtypes [11,12,15,16,57,87,88,89,90].
# Effects of Different Phospholipid Sources on Growth and Gill Health in Atlantic Salmon in Freshwater Pre-Transfer Phase

## Abstract

### Simple Summary

Optimal nutrition is important for Norwegian-farmed Atlantic salmon in the challenging early seawater phase, a period of elevated mortality that leads to significant economic losses. Phospholipids are reported to enhance growth, survival, and health in the early life stages of fish. Atlantic salmon (74 to 158 g) were fed six test diets to evaluate alternative phospholipid (PL) sources in freshwater and were then transferred to a common seawater tank with crowding stress, where they were fed the same commercial diet up to 787 g. Krill meal (KM) was evaluated in a dose response, with the highest dose ($12\%$ KM) compared against $2.7\%$ fluid soy lecithin and $4.2\%$ marine PL (from fishmeal) diets, which were formulated to provide the same $1.3\%$ of added PL on top of base diets with $10\%$ fishmeal in the freshwater period. A trend for increased weight gain, with high variability, was associated with an increased KM dose in the freshwater period but not during the whole trial, whereas the $2.7\%$ soy lecithin diet tended to decrease growth during the whole trial. No major differences were observed in liver histology between the salmon that were fed the different PL sources during transfer. However, a minor positive trend in gill health, based on two gill histology parameters, was associated with the $12\%$ KM and control diets versus the soy lecithin and marine PL diets during transfer.

### Abstract

Growth and histological parameters were evaluated in Atlantic salmon (74 g) that were fed alternative phospholipid (PL) sources in freshwater (FW) up to 158 g and were then transferred to a common seawater (SW) tank with crowding stress, where they were fed the same commercial diet up to 787 g.
There were six test diets in the FW phase: three diets with different doses of krill meal ($4\%$, $8\%$, and $12\%$), a diet with soy lecithin, a diet with marine PL (from fishmeal), and a control diet. The fish were fed a common commercial feed in the SW phase. The $12\%$ KM diet was compared against the $2.7\%$ fluid soy lecithin and $4.2\%$ marine PL diets, which were formulated to provide the same $1.3\%$ of added PL on top of base diets with $10\%$ fishmeal in the FW period. A trend for increased weight gain, with high variability, was associated with an increased KM dose in the FW period but not during the whole trial, whereas the $2.7\%$ soy lecithin diet tended to decrease growth during the whole trial. A trend for a decreased hepatosomatic index (HSI) was associated with an increased KM dose during transfer but not during the whole trial. The soy lecithin and marine PL diets showed similar HSI relative to the control diet during the whole trial. No major differences were observed in liver histology between the control, $12\%$ KM, soy lecithin, and marine PL diets during transfer. However, a minor positive trend in gill health (lamella inflammation and hyperplasia histology scores) was associated with the $12\%$ KM and control diets versus the soy lecithin and marine PL diets during transfer.

## 1. Introduction

Farmed salmon are typically transferred from early-phase production in tanks on land to seawater cages, which constitute a challenging environment where fish can experience significant mortality before reaching harvest size. For example, mortality in Atlantic salmon ranged from 15 to $16\%$ from 2017 to 2021 in Norway, with approximately $35\%$ of sea cage mortality occurring in the first 0–3 months at sea for the 2010–11 generations of Norwegian-farmed Atlantic salmon [1]. This mortality in the early sea cage phase leads to significant economic loss [2].
Thus, research on optimal nutrition to produce robust smolts for improved survival and growth after transfer to the sea cage is of interest to the aquaculture industry [3]. Fish meal (FM) and fish oil (FO) dominated early commercial salmon feed formulations and provided essential nutrients, but usage of these marine ingredients has declined over time, as they are limited resources at generally higher prices compared to alternative ingredients, where sustainability measures are also considered [4]. Antarctic krill meal (KM; Euphausia superba) is a commercially known ingredient in salmon feeds, with potential benefits toward enhancing growth and health in salmonids [5]. The krill fishery in the Antarctic Southern Ocean is considered highly regulated and sustainable [6,7]. KM provides a range of nutrients including proteins (with an amino acid profile similar to FM); water-soluble nitrogenous components (free amino acids, peptides, nucleotides, and trimethylamine N-oxide), which can act as potential feed attractants; astaxanthin; marine omega-3 fatty acids (eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA)); and phospholipids (PLs) [5]. Substantial evidence exists showing that dietary PL can improve growth, survival, and health (reduced intestinal steatosis and deformities) in the larval and early juvenile stages of fish [8,9,10,11]. In addition, KM and krill oil (KO) reduced fat accumulation in the hepatocytes in comparison to soybean lecithin as the PL source in the diet of seabream larvae [10,12,13]. Moreover, there was an indication that seabream juveniles that were fed a diet with $9\%$ KM had lower hepatocyte vacuolization (fat storage) versus a control diet without KM that was higher in fishmeal [12,13], and a non-significant trend for lower hepatocyte vacuolization was indicated for seabream larvae that were fed a diet with krill oil versus soybean lecithin as the PL source [10]. PLs from different sources can have different properties.
In KM, PL makes up approximately $40\%$ of the total lipid, with phosphatidylcholine (PC) at >$80\%$ of the total PL and ca. $18\%$ EPA + DHA of the total lipid [14]. In comparison, fluid soy lecithin can have approximately $46\%$ PL of product (not including glycolipids and complex sugars), with ca. $35\%$ PC of the total PL and, as the major fatty acid (FA), ca. $55\%$ 18:2n-6 of the total FA, with no EPA + DHA [15]. KM has been documented in the diet of seawater salmon [16,17,18]; however, only KO has been documented in the diet of freshwater salmon during the pre-transfer phase [19]. The objective of the present study was to document the effect of the KM dose as a source of PL and compare it against other PL sources in the feed of freshwater Atlantic salmon during the pre-transfer phase, followed by the early seawater phase, by evaluating growth and histological health parameters. A four-level graded dose response for KM up to $12\%$ of the diet, along with a comparison of alternative PL sources (soy lecithin and marine PL from fishmeal) formulated to provide the same level of added $1.3\%$ PL in the diet as $12\%$ KM, was evaluated in freshwater diets for salmon during the pre-transfer phase. Fish identified by pit tag with this pre-transfer freshwater feeding history were then transferred to a common seawater tank, with crowding stress after transfer and a drop in water temperature at transfer (crowding and a water temperature drop can be experienced at transfer commercially), and were then fed the same commercial feed. Gill and liver histology were also compared for salmon that were fed the alternative PL source diets at the end of the freshwater pre-transfer period.

## 2.1. Feed Formulation and Composition

Three different sources of PL were tested in pre-transfer freshwater feeds: (i) krill meal (QrillTM Aqua; Aker BioMarine Antarctic ASA), included at three levels for the dose response ($4\%$, $8\%$, and $12\%$ of diet, with the control diet providing the $0\%$ level); (ii) fluid soy lecithin as a vegetable PL source; and (iii) marine phospholipid-rich oil sourced from North Atlantic fish species from Triple 9 (TripleNine, Trafikhavnskaj 9, DK-6700 Esbjerg, Denmark). The trial diets are referred to as Control, KM4, KM8, KM12, VegPL, and MarPL, respectively. Trial feeds were formulated using a commercial formulation program with external oil mix calculations and were produced by extrusion at the Cargill Innovation Center (Dirdal, Norway) for ca. 74 g fish, with lipid nutrients then adjusted for the purposes of the trial. The 4-mm pre-transfer freshwater trial feeds were formulated and analyzed to have similar digestible energy (22.1–23.6 MJ/kg gross energy), protein (46–$49\%$ range), and fat (22–$24\%$ range) (Table 1), with a similar calculated $1.1\%$ EPA + DHA in the diet, 15–$16\%$ saturated FA in total FA, and a 1.3 n-6/n-3 FA ratio across trial feeds. Protein was analyzed by the Dumas principle using the Elementar Rapid Max N system. Fat was analyzed by low-field nuclear magnetic resonance using the Bruker minispec mq10 NMR analyzer (Cargill Innovation Center, Dirdal, Norway). Gross energy was analyzed by the Leco bomb calorimetry system (Cargill Innovation Center, Dirdal, Norway). Moisture was predicted with the FOSS DS2500 NIR system (Cargill Innovation Center, Dirdal, Norway) using the Cargill feed model. A similar $1.3\%$ PL in the diet across the pre-transfer freshwater diets was calculated from the addition of the $12\%$ krill meal, fluid soy lecithin, and marine PL test ingredients to base formulations with the same $10\%$ fishmeal level across the diets.
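As a back-of-envelope check of the "same added PL" formulation, the PL added to the diet by one ingredient is simply the inclusion level multiplied by the ingredient's PL fraction. The sketch below uses the ca. $46\%$ PL content quoted above for fluid soy lecithin; the KM PL fraction is implied from the formulation targets, not a measured value.

```python
# Back-of-envelope check of the added-PL formulation described above.
# added PL in diet (%) = ingredient inclusion (%) * PL fraction of ingredient.

def added_pl(inclusion_pct: float, pl_fraction: float) -> float:
    """Percent PL contributed to the diet by one ingredient."""
    return inclusion_pct * pl_fraction

# 2.7% fluid soy lecithin at ~46% PL of product (figure quoted in the text):
soy = added_pl(2.7, 0.46)
print(f"Soy lecithin adds ~{soy:.2f}% PL (target: 1.3%)")

# Implied PL content of krill meal if 12% inclusion supplies 1.3% added PL:
km_pl_fraction = 1.3 / 12
print(f"Implied KM PL content: ~{km_pl_fraction:.1%} of product")
```

The soy lecithin figure works out to ~1.24% added PL, consistent with the stated 1.3% target within rounding.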
There was variation in the other ingredients (added oil, plant ingredients, and micronutrients) needed for balancing or reaching nutrient targets. The choline level was formulated to be the same for the Control and VegPL diets, with the MarPL and KM12 diets providing additional choline in the form of phosphatidylcholine (PC). However, the formulated choline levels for the Control and fluid soy lecithin diets were in excess of the NRC 2011 requirements for salmonids and in excess of the lowest choline level used by Hansen and coworkers [20], who observed no growth differences in a dose-response trial (1340 to 4020 mg choline/kg diet, 456 g initial weight salmon). Lipid accumulation in the gut was reduced for salmon (456 g initial weight) at increased choline levels [20]. The formulation and composition of the feeds are given in Table 1.

## 2.2. Fish Trial Conditions

The experiment was performed according to the guidelines and protocols approved by the European Union (EU Council 86/609; D.L. 27.01.1992, no. 116) and by the National Guidelines for Animal Care and Welfare published by the Norwegian Ministry of Education and Research. Atlantic salmon (Salmo salar) with an initial weight of ca. 67 g were used for the trial. The fish were pit-tagged and randomly distributed into 24 freshwater flow-through tanks (1 m diameter and 0.45 m3 volume) to give 40 fish per tank at the start of trial diet feeding. After 15 days of tank acclimation, these fish weighed 74 ± 12 g (average ± SD for all 960 fish in 24 tanks at the start of trial feeding) and were then fed the freshwater pre-transfer trial diets (Table 1) over a 53-day period. Water temperature averaged 14.3 °C (13.3–15.3 °C range), with $107\%$ average oxygen saturation at the inlet and $90\%$ oxygen saturation at the outlet during the freshwater acclimation and trial diet feeding period.
The six trial diets were each fed to four replicate tanks during the 53-day freshwater pre-transfer period using an automatic belt feeder with continuous feeding for 20 h per day in excess of the satiation level. Feed intake was calculated on a weekly basis by collecting and weighing uneaten pellets as well as by weighing the amount fed. There was a 12 h light:12 h dark photoperiod regime from Day 0 at freshwater tank acclimation to Day 33, after which a 24 h light regime was used to initiate smoltification. After this freshwater pre-transfer feeding period, 17–20 fish per tank from the 24 freshwater tanks were transferred to a larger common seawater flow-through tank (5 m diameter and 21.6 m3 volume, 28.5 ppt salinity, with no acclimation time from 0 ppt freshwater to 28.5 ppt seawater), with a water temperature drop at transfer (ca. 14 to 9 °C) and crowding stress after transfer (water level lowered to ca. 20 cm for one hour with supplemental oxygen for all 459 fish of ca. 167 g, within a ca. 0 to 30 h period after transfer). Once all fish had been transferred, they were fed a common commercial extruded salmon diet (EWOS AS) for a further 98 days. Daily water temperature was lower during the seawater phase, averaging 9.4 °C (8.5–11.1 °C range).

## 2.3. Fish Growth

The 40 fish per tank were weighed individually with pit-tag identification on acclimation to the freshwater tanks (Day 0), at the start of trial diet feeding (Day 15), at an intermediate weighing (Day 33), and after 53 days of trial feeding in freshwater (Day 68). The fish weight gain in the freshwater pre-transfer period from Day 15 (start of freshwater trial diet feeding) to Day 68 was compared statistically between diets.
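The per-fish weight gain that feeds the statistics described later can be computed directly from the pit-tag records by pairing each fish's start and end weights. A minimal sketch, with hypothetical tag IDs and weights (not trial data):

```python
# Sketch of the per-fish weight-gain computation described above, pairing
# Day-15 and Day-68 weights by pit-tag ID. All values are hypothetical.
day15 = {"A1": 75.0, "A2": 72.5, "A3": 78.2}    # g at start of trial feeding
day68 = {"A1": 160.1, "A2": 149.9, "A3": 165.0}  # g at end of FW pre-transfer

# Per-fish gain, keeping only fish weighed at both time points:
gain = {tag: day68[tag] - day15[tag] for tag in day15 if tag in day68}
mean_gain = sum(gain.values()) / len(gain)
print(gain)
print(f"Tank mean gain: {mean_gain:.1f} g")
```

In the actual analysis these per-fish gains were modelled hierarchically (with tank as a random effect) rather than summarized by simple tank means.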
A total of 17–20 fish from each of the 24 freshwater tanks were transferred to the common seawater tank on Day 68, with fish weighing performed on Days 35, 73, and 98 after transfer to seawater. At the final weighing in seawater, 98 days after transfer to the common tank, there were 9 to 17 fish representing each original tank from the freshwater period and 50 to 58 fish representing each of the test diets. The fish weight gain over the whole trial period in freshwater and seawater, from Day 15 to Day 166, was compared statistically between diets.

## 2.4. Hepatosomatic Index

The hepatosomatic index (HSI) is the liver weight as a percentage of the whole body weight. HSI was measured on 10 fish randomly sampled per tank (four tank replicates per diet), giving 40 fish per diet at the end of the freshwater pre-transfer period when fed the test diets and 40 fish per diet (identified by pit-tag) at the end of the seawater phase when fed the common commercial diet.

## 2.5. Histology

Gill and liver histology were performed on the fish involved in the dietary phospholipid source comparison (KM12, VegPL, and MarPL) and on fish fed the Control diet at the end of the freshwater pre-transfer period. Liver (half tissue section) and gill (left gill arch 2) tissues were randomly sampled from five fish per tank to give a total of 20 liver and 20 gill tissues per diet group for histological analysis. The tissues were fixed in formalin ($4\%$ formaldehyde) and stored at room temperature until sent to Pharmaq Analytiq AS (Harbitzallée 2A, 0275 Oslo, Norway) for histological analysis.

## 2.6. Statistical Analysis

The weight gain for the different periods was modelled by computing the weight gain of each tagged individual and then using a hierarchical generalized additive model (GAM) with a spline function to describe the possibly non-linear dose response.
A random effect of tank was added to the model to account for the multiple individual observations per experimental unit. The total feed intake over the periods of interest was modelled with a single-level GAM with a spline function describing the dose-response function. The hepatosomatic index (HSI) was modelled by a hierarchical GAM using a spline function to describe the dose-response function, the mean-centered round weight of the fish as a covariate, and a random effect of tank to account for the multiple individual observations per tank. From this model, the expected liver weight was solved for an average-sized sampled fish and expressed as HSI by dividing the expected liver weight by the mean round weight of the sample. Gill and liver histology scores are ordinal variables for which common arithmetic operations, such as the sum or mean, are not defined; the scores therefore require an ordinal model returning the score probability for evaluation. A hierarchical GAM for ordinal data was set up using a spline function to describe the dose-response function, and a random effect of tank was included to account for the multiple individuals observed per tank. The models for weight gain, feed intake, and HSI assumed normally distributed errors, whereas the model for gill and liver scores was ordinal, with errors following the ordered categorical family. All data processing and statistical modelling were conducted with the R language [21]. The GAMs were estimated with the “gam” function of the R add-on package “mgcv” [22]. The outcomes from the fitted statistical models are presented graphically by showing the mean response and the $95\%$ credible intervals. The mean (median) response and the $95\%$ credible intervals were computed with the help of a parametric bootstrap (with 10,000 random draws per parameter) by taking the $2.5\%$, $50\%$, and $97.5\%$ quantiles of the computed response vector.
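The credible-interval summary just described reduces to taking percentiles of the bootstrap draw vector. A minimal sketch (in Python rather than the R/mgcv workflow the authors used), with simulated normal draws standing in for the parametric-bootstrap output of a fitted GAM:

```python
# Minimal sketch of the interval summary described above: given a vector of
# bootstrap draws of a model response, report the median and the 95% interval
# as the 2.5th, 50th, and 97.5th percentiles. The normal draws below are
# hypothetical stand-ins for the 10,000 parametric-bootstrap draws.
import numpy as np

rng = np.random.default_rng(42)
draws = rng.normal(loc=83.0, scale=4.0, size=10_000)  # hypothetical response draws

lo, med, hi = np.percentile(draws, [2.5, 50, 97.5])
print(f"median = {med:.1f}, 95% CrI = [{lo:.1f}, {hi:.1f}]")
```

Plotting the median with the [lo, hi] band reproduces the "magnitude plus uncertainty in one graph" presentation used for the results figures.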
In the case of a categorical predictor variable (for comparing the different PL sources), the graphs show the mean and an error bar for the $95\%$ credible interval. In the case of a continuous predictor (for the dose response of krill meal inclusion), the mean response is shown as a median dose-response curve and the $95\%$ credible interval is shown as a band around the curve. In this way, both the magnitude of any potential effect (biological significance) and the uncertainty of any effect estimate (statistical significance) can be shown in the same graph for all the results, independent of whether the response follows the normal, binomial, or ordered categorical distribution.

## 3.1. Growth Performance

Atlantic salmon of 74 g (overall tank average) were fed the six test diets up to 158 g (overall tank average), growing to 2.1 times their initial weight by the end of the freshwater pre-transfer period. There was no clear trend for increased feed intake with KM dose in the FW pre-transfer phase (Figure 1). A trend for increased feed intake was indicated for the Control and KM12 diets compared to the MarPL and VegPL diets in the PL source comparison for the FW pre-transfer phase (Figure 2). There was overall high variability in the feed intake comparisons. A trend for increased fish weight gain, with high variability, was indicated with increased KM dose in the FW phase (Figure 3). Weight gain during the whole trial was similar across the KM doses fed in the FW pre-transfer phase followed by the same commercial diet in a common tank for the SW phase (Figure 4). Fish fed the KM12 diet had increased weight gain compared to the VegPL diet, with the MarPL and Control diets having intermediate weight gains, in the PL source comparison for the FW pre-transfer phase (Figure 5).
Weight gain was similar for the fish that were fed the KM12, MarPL, and Control diets, with a trend for higher weight gain than the VegPL group indicated during the whole trial, in which the KM dose was fed in the FW pre-transfer phase followed by the same commercial diet in a common tank for the SW phase (Figure 6, Tables S1 and S2). ## 3.2. Hepatosomatic Index A trend for decreased hepatosomatic index (HSI; liver weight as a percentage of fish weight) was indicated for the fish that were fed increased KM dose from 0 to $12\%$ of diet at the end of the freshwater pre-transfer feeding phase (Figure 7). There was no decrease in HSI with KM dose at the end of the whole trial after the FW pre-transfer phase followed by feeding the same commercial diet in a common tank for the SW phase (Figure 8). A lower HSI was indicated for the fish that were fed the KM12 diet compared with the fish that were fed the MarPL, VegPL, and Control diets at the end of the freshwater pre-transfer feeding phase (Figure 9), with a similar minor HSI trend observed over the whole trial (Figure 10). ## 3.3.1. Gill Histology An increased probability for very mild to mild gill lamella inflammation and hyperplasia score was indicated for the salmon that were fed the VegPL and MarPL diets compared to the Control and $12\%$ KM diets at the end of the freshwater pre-transfer phase after 53 d of feeding the trial diets (Figure 11a,b). The following other gill histology responses were evaluated, with no major differences between the diets: vascular lesions, filament inflammation, necrosis of respiratory epithelium, necrosis affecting deeper tissues, fusion of lamellae, and other lesions noted as present or absent. ## 3.3.2. Liver Histology No major differences were observed in liver histology between the control, $12\%$ KM, soy lecithin, and marine PL diets at the end of the FW pre-transfer phase after 53 d of feeding the trial diets (data not shown). 
The following liver histology responses were evaluated: total amount of abnormal tissue, inflammation, necrosis, inflammation in liver tissue or capsule (peritonitis), peribiliary or perivascular inflammation, neoplasia, fibrosis, lipid deposition, other degenerative changes, vascular lesions, and other lesions noted as absent or present. ## 4. Discussion The present study evaluated the effect of different phospholipid sources fed over 53 d in the freshwater pre-transfer phase, followed by feeding the same commercial diet over 98 d in a common seawater tank, on growth performance and health parameters of Atlantic salmon. KM was evaluated in a dose response ($4\%$, $8\%$, and $12\%$ of diet), and diets with $2.7\%$ fluid soy lecithin (VegPL) and $4.2\%$ MarPL as alternative PL sources were formulated to provide the same level of added $1.3\%$ PL in diet as $12\%$ KM. All the test diets contained $10\%$ fishmeal in the FW phase. A trend was indicated for increased fish weight gain (high variability) with increased KM dose in the FW pre-transfer phase, but a carry-over effect on growth was not observed for the same salmon fed the same commercial diet after seawater transfer. Salmon (104 g initial weight) that were fed krill meal at $7.5\%$ and $15\%$ of diet, in diets with higher fishmeal levels (40–$52\%$ of diet) than the current trial, had increased growth after transfer to sea cages [16]. Fishmeal provides PL, so higher fishmeal diets may reduce the need for KM as a PL source [23]. However, KM also provides amino acids (protein), water-soluble nitrogenous components (potential feed attractants), astaxanthin, and EPA + DHA, and hence it is more than a PL source. 
KM feeding may need to continue after seawater transfer to have a positive effect on growth at the end of the trial, noting the positive effects of KM on salmon growth observed in other but not all trials, which can depend on life stage and challenges, diet composition, KM refining (de-shelling, etc.), and inclusion level [5]. A trend for decreased fish weight gain was indicated for the VegPL diet in the FW phase and over the whole trial compared with the control diet, whereas the MarPL diet showed growth more similar to the control diet over the whole trial, noting that only one PL level was tested for MarPL and fluid soy lecithin, matched to the PL provided by KM12, so the optimal dose was not evaluated. The choline level was formulated to be the same for the control and VegPL diets, with KM12 and MarPL providing additional choline to these diets in the form of phosphatidylcholine (PC). Formulated choline levels for the control diet and fluid VegPL diets were in excess of the NRC 2011 requirements for salmonids and in excess of the lowest choline level used by Hansen et al. in 2020, where no growth differences were observed (1340 to 4020 mg choline/kg diet in a dose-response trial with 456 g initial weight salmon) [20]. Lipid accumulation in the gut was reduced for these salmon (456 g initial weight) at increased choline levels [20]. Effects of increased choline with KM inclusion cannot be ruled out, and further research would be needed to separate choline effects from PL effects in these smaller pre-transfer salmon (74 to 158 g fish weight), which were fed lower-fat pre-transfer diets (22–$24\%$ fat) than during seawater growth; the choline requirement for reducing lipid accumulation in the intestine is potentially dependent on dietary fat level [20]. 
Higher growth was generally observed for PL provided by KO over soy lecithin at various PL doses for the first-feeding stage of salmon, but this growth trend was not consistent at various PL doses over the whole trial from first feeding to smolt [19]. PL from KO was indicated to be more effective than fluid soy lecithin for reducing intestinal steatosis in smaller salmon (2.5 g salmon; no steatosis was observed across diets for 10–20 g salmon) and for keeping vertebral deformities at a low level [19]. Marine PL sources (FM and KO) were also compared against soy lecithin at a similar PL level of ca. $3.5\%$ of diet for first-feeding Atlantic salmon (0.14 g initial weight); these PL sources gave similar growth to ca. 2.4 g final fish weight, with no conclusive mortality or intestinal histology differences between PL sources, although these parameters were generally improved for the PL source diets with higher PL compared to the control diets with lower PL. An uncertain observation of higher average growth was indicated for the marine PL sources over soy lecithin at intermediate weighing for salmon at ca. 0.6 g [24]. Effects of PL cannot be isolated from the other components of KM, but the increased growth for KM12 over the VegPL diet in the pre-transfer phase may be due to PL, choline, water-soluble nitrogenous components, etc., noting that a trend for decreased growth of VegPL versus the control diet was also indicated in the pre-transfer phase. Addition of KM did not give a clear increase in feed intake compared to the control diet, and a trend of decreased feed intake was indicated for the MarPL and VegPL diets, but strong conclusions cannot be made due to the high variability. Feed intake can only be measured on a tank basis, so it was not possible to estimate the feed intake of fish with different pre-transfer freshwater feeding histories that were fed the same diet in a common tank during the seawater phase. 
A trend for decreased hepatosomatic index (HSI) was indicated with increased KM inclusion and for the $12\%$ KM diet versus the other PL sources added to provide the same PL level in the pre-transfer phase, but the effect of KM on decreasing HSI was not carried over into the seawater phase with fish that were fed the same diet in a common tank (Figure 7, Figure 8, Figure 9 and Figure 10). There was no difference in the liver lipid droplet accumulation based on histology (normal scores only) for salmon that were fed the diets containing different PL sources at the end of the freshwater pre-transfer period. The lower HSI in KM12 could be due to the positive effects from krill PL (and choline) on lipid transport and deposition in organs, with this effect of feeding $12\%$ KM to Atlantic salmon documented by [17] as less pale livers and reduced liver fat. The authors further supported this observation with a significantly higher expression of the cadherin 13 (Chd) gene in the $12\%$ KM group; Chd is associated with circulating levels of the adipocyte-secreted protein adiponectin, which has potential anti-inflammatory effects, plays an important role in metabolic regulation, and is associated with the fatty liver index in humans [25]. However, Chd expression was not studied in the current study, and hence, further studies are warranted to explore the association between Chd expression, HSI, and absolute fat accumulation in the liver of salmon. Increased choline, which KM provided in this trial, was shown to reduce fat accumulation in the intestine of Atlantic salmon [20]. Choline supplementation was also indicated to reduce HSI in Atlantic salmon, but this was not reflected in lower liver fat or histological vacuolization, noting that variable trends of dietary choline deficiency on the liver fat level of fish are reported in the literature [26]. 
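The hepatosomatic index discussed above is liver weight expressed as a percentage of round (whole) fish weight. A minimal sketch of the calculation, using invented example weights rather than data from the trial:

```python
def hepatosomatic_index(liver_weight_g: float, round_weight_g: float) -> float:
    """HSI: liver weight as a percentage of whole (round) fish weight."""
    return 100.0 * liver_weight_g / round_weight_g

# Hypothetical 158 g fish with a 2.1 g liver (illustrative values only).
print(round(hepatosomatic_index(2.1, 158.0), 2))  # -> 1.33
```

In the paper the expected liver weight comes from the hierarchical GAM for an average-sized fish before this division, rather than from a single observed fish.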
Further studies are required to associate higher liver fat with welfare in salmon. Gills are one of the most vital organs of fish, due to their function in respiration, osmoregulation, excretion of nitrogenous waste, pH regulation, and hormone production [27]. Gill health has become one of the most significant health and welfare challenges in the salmon aquaculture industry in Norway, Scotland, and Ireland [28,29,30]. Gill disorders are generally complex and multifactorial and are related to both biological factors, such as parasites and pathogens, handling stress, and treatments, and environmental factors, such as temperature, salinity, algal blooms, etc. Hence, gill diseases are challenging to prevent and control, and they lead to high mortality, reduced production performance, and impaired fish welfare, culminating in huge economic losses [31]. No differences were reported for the histological parameters investigated except for the presence of ectopic epithelial cells containing mucus in the lamina propria of the hindgut (a potential inflammatory marker) in salmon (grown from 2.3 to 3.9 kg in sea cages) fed a $15\%$ fishmeal diet, but not in salmon fed $12\%$ KM in a $5\%$ fishmeal diet, which may suggest anti-inflammatory effects of KM [17]. KM provides astaxanthin (166 mg/kg in the KM used for the present study) to the diet as a natural antioxidant with potential anti-inflammatory properties [32]. KM and MarPL also provide EPA + DHA attached to PL, which may affect the bioavailability of EPA + DHA for use in cell membranes and the inflammatory response [33], but this is not documented in fish. 
In the current study, a decreased probability of very mild to mild gill lamella inflammation and hyperplasia scores was indicated for salmon that were fed $12\%$ KM compared to the soy lecithin and marine PL diets, but gill histology for salmon that were fed the $12\%$ KM diet was similar to the control diet without KM (Figure 11a,b). ## 5. Conclusions Overall, increased KM tended to increase growth (high variability), whereas the VegPL diet tended to decrease growth compared to the control diet in the FW pre-transfer phase. The positive growth trend indicated for KM fed pre-transfer was not carried over into the seawater phase for fish fed the same diet. A minor positive trend in gill health (lamella inflammation and hyperplasia histology scores) was indicated for the $12\%$ KM and Control diets compared with the VegPL and MarPL diets in the FW pre-transfer phase. Hepatosomatic index tended to decrease with KM fed in the pre-transfer phase, noting that all livers evaluated by histology were considered normal for lipid droplet accumulation. Only one VegPL and one MarPL dose was tested; therefore, the dose effects of these PL sources, a comparison with krill oil to better isolate the PL effect from the other nutrients in KM, and a post-transfer feeding comparison of these PL sources could be areas for further research on transfer diets for salmon.
# Effects of Bacillus licheniformis and Combination of Probiotics and Enzymes as Supplements on Growth Performance and Serum Parameters in Early-Weaned Grazing Yak Calves ## Abstract ### Simple Summary This study was conducted to investigate the effects of dietary supplementation with *Bacillus licheniformis* and a combination of probiotics and enzymes on the growth and blood parameters of grazing yak calves. The body weight, body size, serum biochemical parameters, and growth hormone levels of grazing yaks were assessed. We found that supplementation with probiotics alone or with a combination of probiotics and enzymes significantly increased the average daily gain compared to the controls, and the combination of probiotics and enzymes showed a better performance. Supplementation with the complex of probiotics and enzymes significantly increased the concentrations of serum growth hormone, insulin-like growth factor-1, and epidermal growth factor, which may be the main reason for the higher daily weight gain. The findings of this study may help improve the growth efficiency of yak calves on the Qinghai–Tibetan Plateau. ### Abstract Early weaning is an effective strategy to improve cow feed utilization and shorten postpartum intervals; however, it may lead to poor performance of the weaned calves. This study was conducted to test the effects of supplementing milk replacer with *Bacillus licheniformis* and a complex of probiotics and enzyme preparations on body weight (BW), size, and serum biochemical parameters and hormones in early-weaned grazing yak calves. Thirty 2-month-old male grazing yaks (38.89 ± 1.45 kg body weight) were fed milk replacer at $3\%$ of their BW and were randomly assigned to three treatments (n = 10 each): T1 (supplementation with 0.15 g/kg Bacillus licheniformis), T2 (supplementation with a 2.4 g/kg combination of probiotics and enzymes), and a control (without supplementation). 
Compared to the controls, the average daily gain (ADG) from 0 to 60 d was significantly higher in calves administered the T1 and T2 treatments, and that from 30 to 60 d was significantly higher in calves administered the T2 treatment. The ADG from 0 to 60 d was significantly higher in the T2- than in the T1-treated yaks. The concentrations of serum growth hormone, insulin-like growth factor-1, and epidermal growth factor were significantly higher in the T2-treated calves than in the controls. The concentration of serum cortisol was significantly lower in the T1 treatment than in the controls. We concluded that supplementation with probiotics alone or a combination of probiotics and enzymes can improve the ADG of early-weaned grazing yak calves. Supplementation with the combination of probiotics and enzymes had a stronger positive effect on growth and serum hormone levels, compared to the single-probiotic treatment with Bacillus licheniformis, providing a basis for the application of a combination of probiotics and enzymes. ## 1. Introduction Yaks (Bos grunniens) live on the Qinghai–Tibet Plateau, a region with high altitudes, long cold seasons, and limited pasture resources. This species is a unique product of long-term natural selection, providing local herders with the most basic living materials and livelihood resources, such as meat, milk, shelter (hides and furs), and fuel (dung), and is an indispensable part of the ecology and economy of the Qinghai–Tibetan Plateau [1]. However, the low reproductive rate of yaks seriously restricts their production and utilization. The cold season on the Tibetan Plateau lasts for eight months (October to the following May), during which time the quantity and quality of pasture decrease below the nutritional requirements of lactating yaks [2]. The deficient feed intake results in a negative body energy balance and metabolic stress [3]. 
On the other hand, under traditional grazing management, plateau-grazing yak calves are weaned naturally or artificially under various conditions at an age of 18–24 months [4], in contrast to the weaning age of domestic beef cattle (<6 months). The cow's slow recovery and the late weaning of yak calves, which result in a poor postpartum physical condition, severely delay the onset of the next estrous cycle in the cow. Most yaks exhibit a long postpartum anestrous period and calve twice every 3 years or once every 2 years [5]. Therefore, the early weaning of yak calves may help mitigate these adverse effects. Early weaning has become more popular in recent years for various reasons, including the better use of limited feed resources and alleviating grazing pressure on pastures by reducing the nutritional needs of cows [6]. Weaning calves before the start of the breeding season improves the reproductive performance of cows [7,8] because the cows can regain their weight faster, thus accelerating the onset of postpartum estrus. The use of milk replacer in early weaning is common in livestock production [9,10]. Milk replacer has demonstrated benefits in animal experiments, such as improved immunity and a relieved weaning stress response [11]. Increasing evidence suggests that enhanced milk replacer feeding is beneficial for improving gut microbial development and growth performance in early-weaned lambs [12,13]. Over the past few decades, probiotics have been widely used in livestock and poultry production for their ability to enhance animal disease resistance, improve feed utilization, and improve growth performance [14]. In ruminants, yeasts and bacteria, including Lactobacillus, Bifidobacterium, Bacillus, Propionibacterium, and Enterococcus, alone or in combination, are used as additives in diets [15,16]. Probiotics can decrease diarrhea, improve production and feed utilization efficiency, and strengthen the immune system in young ruminants [17,18,19]. 
Moreover, supplementation with probiotics improves the rumen and intestinal epithelial cell growth, which enhances the gastrointestinal tract development and health status of calves [17,20,21]. Oral administration of *Bacillus licheniformis* can increase ruminal digestibility and total volatile fatty acid concentrations in Holstein cows [22] and growth performance in Holstein calves [23]. In vitro inoculation with *Bacillus licheniformis* also improves the ruminal fermentation efficiency of forage of various qualities [24]. However, no information is currently available on the effect of *Bacillus licheniformis* on the growth performance of yak calves. Compound enzyme preparations are produced from one or more preparations containing a single enzyme as the main entity, which is mixed or fermented with other single enzyme preparations to form one or more microbial products [25], including saccharylases, amylases, cellulases, proteases, phytases, hemicellulases, and pectinases. Depending on the differences in digestive characteristics and diet composition, specific enzyme preparations can be used for livestock [26]. Specific enzyme complex preparations can degrade multiple feed substrates (antinutrients or nutrients), and different types of enzymes can work synergistically to maximize the nutritional value of feed [27]. In buffalo calves, cellulase and xylanase are more effective with regard to average daily gain (ADG) and feed efficiency [28]. Further, the addition of exogenous fibrolytic enzymes to wheat straw has no effect on starter feed intake and increases nutrient digestibility and recumbency, but decreases the ADG of weaned Holstein dairy calves [29]. The effects of probiotics or compound enzyme preparations on the production performance and biochemical blood indexes of calves are not consistent [29,30,31,32,33]. 
The respective discrepancies may be due to differences in the amounts of added probiotics and exogenous enzymes, the strains of probiotics, diets, and animal management strategies. Therefore, this study was conducted to compare the effects of *Bacillus licheniformis* and a combination of probiotics and enzymes on the growth performance and serum parameters in yak calves, so as to provide a theoretical basis for the application of probiotics in grazing yak calves. ## 2.1. Animals and Treatment This study was performed in accordance with the Chinese Animal Welfare Guidelines, and the experimental protocols were approved by the Animal Care and Ethics Committee of the Institute of Animal Husbandry and Veterinary Medicine, Tibet Academy of Agriculture and Animal Husbandry Science (No. TAAAHS-2016–27). The feeding trial was conducted at Damxung County (Lhasa, China; 30.5° N, 91.1° E) from July to October. The average altitude was 4200 m, the average annual temperature was 1.3 °C, and the average annual precipitation was 456.8 mm. Thirty 2-month-old male yaks (38.89 ± 1.45 kg body weight (BW)) were fed milk replacer solution at $3\%$ of their BW every day and were randomly assigned to three dietary supplementation treatments (n = 10 each), according to BW and age, as follows: T1, supplemented with 0.15 g/kg *Bacillus licheniformis* (2 × 10^10 CFU/g); T2, supplemented with a 2.4 g/kg combination of probiotics and enzymes (containing 0.4 g/kg Bacillus licheniformis, 2 × 10^10 CFU/g; 1.0 g/kg yeast, 1 × 10^10 CFU/g; and 1.0 g/kg of a 1:1:1 mixture of xylanase (20,000 U/g), cellulase (1500 U/g), and glucanase (6000 U/g)); and a control treatment. The milk replacer, probiotics, and enzyme preparations were provided by the Chinese Academy of Agricultural Sciences (Beijing, China). 
All yak calves were allowed to graze on an alpine meadow during daytime for the 60-day trial, and they were individually fed milk replacer before and after grazing (0800 and 2000 h, respectively). The forage of the alpine meadow was mainly composed of Kobresia tibetica, and the nutrient composition (dry matter basis) was analyzed in our previous study [34], i.e., $10.4\%$ crude protein, $2.1\%$ ether extract, $67.8\%$ neutral detergent fiber, $34.2\%$ acid detergent fiber, and $4.6\%$ ash. The powdered milk replacer was weighed and mixed with warm water (approximately 40 °C) at a ratio of 1:7 (w/v) to obtain milk replacer solution, according to our previous study [35]. Based on preliminary assessments, the feeding amount of milk replacer was calculated so that all yak calves were able to feed without surplus [35]. The nutrient composition of the milk replacer is shown in Table 1. ## 2.2. Sample Collection and Analysis The BW of each yak calf was recorded before morning feeding on d 0, 30, and 60 using a platform scale, and the ADG was calculated accordingly. The body size indexes of all yak calves were determined using a linen tape at the beginning (d 0) and end (d 60) of the experiment, as previously described [36]. Blood samples (approximately 10 mL) were collected from the jugular vein of the yak calves using a vacuum tube before morning feeding on d 0 and 60. The blood samples were centrifuged at 1100× g for 10 min to obtain serum, which was then aliquoted in 1.5 mL centrifuge tubes and stored at −20 °C. The serum biochemical parameters, including blood urea nitrogen (BUN), globulin (GLB), blood glucose (GLU), and non-esterified fatty acids (NEFAs), were analyzed using an automatic biochemical analyzer 7020 (Hitachi, Tokyo, Japan). 
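The growth and feeding quantities described in this section (ADG from the body weights recorded on d 0, 30, and 60, and milk replacer solution fed at $3\%$ of BW, mixed at a 1:7 w/v powder:water ratio) reduce to simple arithmetic, sketched below. The example weights are hypothetical, and the reading that the $3\%$ of BW refers to the prepared solution is our assumption:

```python
def adg(bw_start_kg: float, bw_end_kg: float, days: int) -> float:
    """Average daily gain (kg/d) between two weighings."""
    return (bw_end_kg - bw_start_kg) / days

def daily_milk_replacer(bw_kg: float, feed_rate: float = 0.03,
                        powder_to_water: float = 1 / 7):
    """Daily milk replacer solution (kg) fed at 3% of BW, and the powder
    needed to mix it at a 1:7 (w/v) powder:water ratio.

    Assumes the 3% of BW refers to the prepared solution (an assumption,
    not stated explicitly in the text)."""
    solution_kg = bw_kg * feed_rate
    # At 1:7, powder is 1 part out of 8 total parts of solution.
    powder_kg = solution_kg * powder_to_water / (1 + powder_to_water)
    return solution_kg, powder_kg

# Hypothetical calf: 38.89 kg at d 0 and 50.0 kg at d 60 (invented end weight).
print(f"ADG: {adg(38.89, 50.0, 60):.3f} kg/d")
solution, powder = daily_milk_replacer(38.89)
print(f"solution: {solution:.2f} kg/d, powder: {powder:.2f} kg/d")
```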
Metabolic hormones in the serum, including insulin-like growth factor-1 (IGF-1), epidermal growth factor (EGF), cortisol, insulin (INS), and growth hormone (GH), were determined using commercial ELISA kits (Jiahong Technology Co., Ltd., Beijing, China) according to the manufacturer’s instructions. Briefly, 50 μL of each five-fold diluted serum sample was added to each well of a 96-well ELISA plate. After 30 min of incubation at 37 °C, the plate was washed five times using PBS (Servicebio, Wuhan, China) to remove unbound proteins. Then, 50 μL of HRP-conjugated antibodies was added to allow them to bind with their corresponding antigens. The 3,3′,5,5′-tetramethylbenzidine working solution was added to each well, followed by stop solution. Absorbance was measured using a multi-plate reader (Varioskan LUX, Thermo Fisher Scientific, Waltham, MA, USA) at a wavelength of 450 nm. ## 2.3. Statistical Analysis All experimental data of this study were statistically analyzed using a one-way analysis of variance followed by Duncan’s post hoc test with SPSS 26.0 software (SPSS Inc., Chicago, IL, USA). Each yak calf was considered an experimental unit. Data are expressed as means ± standard error. $p \leq 0.05$ was considered statistically significant. ## 3.1. Body Weight The three treatments did not differ significantly in terms of BW on d 0, 30, and 60 (Table 2). The ADG was higher ($p \leq 0.05$) in the calves under T2 treatment than in those under the control treatment from d 0 to 30, d 30 to 60, and d 0 to 60, and higher ($p \leq 0.05$) than that of the calves under the T1 treatment from d 0 to 60, indicating that the supplementation of *Bacillus licheniformis* and the combination of probiotics and enzymes could improve the growth performance of early-weaned grazing yak calves. The ADG of calves under T1 treatment was higher ($p \leq 0.05$) than that of those under the control treatment from d 0 to 60. ## 3.2. Body Size The body size parameters did not differ significantly among the three treatments on d 0 and 60 (Table 3), indicating that the supplementation of *Bacillus licheniformis* and the combination of probiotics and enzymes did not affect the body size of yak calves within 60 d. ## 3.3. Serum Biochemical Parameters The concentrations of serum GLB, BUN, GLU, and NEFAs did not differ significantly among the three treatments on d 0 and 60 (Table 4). ## 3.4. Serum Hormones As shown in Table 5, the concentrations of serum IGF-1 on d 60 were higher in T2-treated calves than in the T1- and control-treated calves ($p \leq 0.05$, each). The concentrations of serum EGF and GH on d 60 were higher in the T2-treated calves than in the controls ($p \leq 0.05$). The concentration of serum cortisol on d 60 was higher in the control calves than in those under the T1 treatment ($p \leq 0.05$). ## 4. Discussion Early weaning may have various benefits for cows; however, early-weaned calves generally perform poorly compared to naturally weaned calves [37]. Early-weaned calves that were not suckled grew at a lower rate and subsequently took longer to reach their target weight than suckled calves [38]. To improve the growth performance of early-weaned calves, several improvements were made to the composition of milk replacer, or additional feeds were added [39,40,41]. Moreover, the addition of probiotics to the diets of calves significantly improved the ADG [29,30,33]. Dietary supplementation with compound enzyme preparations also improved growth performance in weaned piglets [42,43] and growing-finishing pigs [44]. However, previous studies also reported that supplementation with probiotics, yeast cultures, or enzymes had no effect on the growth performance of calves [31,32,45]. 
In the current study, the addition of *Bacillus licheniformis* alone or a complex of probiotics and compound enzyme preparations to the milk replacer significantly improved the performance of grazing yak calves compared with milk replacer alone. Further, the addition of probiotics is beneficial for the regulation of the intestinal microbiota community structure, improving intestinal health and fecal consistency, and reducing diarrhea prevalence [19,31,46,47,48]. The supplementation of fibrolytic enzymes in the diet of crossbred calves improved their nutrient digestibility, with a positive effect on daily gain [49]. Calves typically exhibit high metabolism and fast growth; however, their growth performance is susceptible to environmental stress and nutrient absorption and digestive problems, especially in the period after weaning [50]. Under natural grazing conditions on the Qinghai–Tibet Plateau, due to the long-term lack of pasture and harsh environmental conditions, the normal growth of yak calves is severely restricted [48]. In the present study, none of the study animals died, which may be attributed to the supplementation with milk replacer. Therefore, the addition of probiotics and compound enzyme preparations was beneficial for the growth of grazing yak calves. In most cases, calf weight is positively correlated with body length, and body length can be used to predict calf live weight [51,52]. Supplementation with *Bacillus subtilis* increased body length and BW in Barki lambs in the third and fourth weeks of a four-week continuous feeding trial [53]. In the present study, neither body size nor BW differed among the treatments, which may be due to insufficient trial duration and individual differences in animals. Therefore, more time may be required to elucidate whether the probiotic and compound enzyme preparations affected the calves’ body size. 
To a certain extent, blood biochemical parameters reflect the metabolism and the acid–base balance of the animal body, and they vary within a certain range [54,55]. The results of the current study revealed that supplementation with *Bacillus licheniformis* and the complex of probiotics and enzyme preparations had no effect on the blood biochemical parameters of grazing yak calves, which is consistent with previously reported results in crossbred and Holstein calves [56,57]. The blood biochemical values of calves vary with the growing stage and are strongly influenced by weaning [58,59], and these possible factors may be stronger than the influence of diet on blood biochemical indicators. Insulin-like growth factors (IGFs) are small polypeptide hormones mainly synthesized and secreted by the liver; they are structural homologs of insulin with similar activities. They bind to specific carrier proteins in the blood to form a complex that stimulates systemic body growth and has growth-promoting effects on almost every cell in the body [60,61]. As mediators of GH action, the synthesis of IGFs is also affected by the blood level of GH [62]. EGF is a member of the growth factor family, a single polypeptide of 53 amino acid residues that is involved in regulating cell proliferation [63]. We found that the combination of probiotics and enzymes significantly increased the concentrations of serum IGF-1, EGF, and GH, whereas supplementation with *Bacillus licheniformis* alone did not achieve this effect. These results are consistent with the ADG results. GH and IGF-1 are important regulators of amino acid metabolism in calves: GH promotes the entry of amino acids into muscle cells and increases protein synthesis, and IGF-1 increases protein deposition by promoting protein synthesis [63,64]. 
Cortisol is commonly used as a marker of stress responses (such as weaning stress) in animals, and its serum levels remain high for a period of time after calves are weaned [65]. In line with our results, oral supplementation with probiotics markedly decreases the concentrations of serum cortisol in neonatal and weaned calves [66,67]. Interestingly, we found that the concentrations of serum cortisol were lower in the T1 than in the T2 group, although this difference was not statistically significant. This suggested that the addition of *Bacillus licheniformis* alone may better alleviate weaning stress in grazing yak calves. However, the respective mechanisms remain to be resolved in more detail. A limitation of this study is that the T2 treatment differed from the T1 treatment in more than one variable, so the factors (yeast, or xylanase, cellulase, and glucanase) that contributed to the difference remain unclear. This was because the initial intention of this study was to improve the milk replacer by adding probiotics or compound enzyme preparations and ultimately promote the growth performance of yak calves on the Qinghai–Tibet Plateau. Further, we were unable to collect data on diarrhea and determine nutrient digestibility in grazing calves, which would have further improved our understanding of the weight gain of yaks under the various treatments. ## 5. Conclusions Our results suggest that supplementation with *Bacillus licheniformis* alone or with a complex of probiotics (*Bacillus licheniformis* and yeast) and compound enzyme preparations (xylanase, cellulase, and glucanase) can improve the ADG of grazing yak calves, and the complex had a better effect on the ADG. The addition of the complex of probiotics and compound enzyme preparations also increased the concentrations of serum GH, IGF-1, and EGF, which may have led to a higher ADG. 
Thus, the addition of a combination of probiotics and enzymes to milk replacer may serve as an effective strategy to improve the production of yak calves.
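The ADG comparisons discussed above come down to a simple calculation; a minimal Python sketch, with hypothetical calf weights and trial length (not values from the study):

```python
# Average daily gain (ADG): a minimal sketch with hypothetical weights.
def average_daily_gain(initial_kg: float, final_kg: float, days: int) -> float:
    """ADG (kg/d) = (final body weight - initial body weight) / days on trial."""
    if days <= 0:
        raise ValueError("days must be positive")
    return (final_kg - initial_kg) / days

# Hypothetical yak-calf weights over a 60-day feeding period.
adg = average_daily_gain(initial_kg=35.0, final_kg=53.0, days=60)
print(f"ADG = {adg:.2f} kg/d")  # prints "ADG = 0.30 kg/d"
```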
# Growth Performance, Antioxidant and Immunity Capacity Were Significantly Affected by Feeding Fermented Soybean Meal in Juvenile Coho Salmon (Oncorhynchus kisutch) ## Abstract ### Simple Summary Fish meal has long been the main protein source in aquatic feed. However, global fish meal supply is limited and its price continues to rise, so it can no longer meet demand. Soybean meal is currently recognized as the best candidate to replace fish meal in aquatic feed, but it contains anti-nutritional factors that can affect the health of aquatic animals. Microbial fermentation is a commonly used biological method for reducing soybean meal antigens and improving palatability. In this study, juvenile coho salmon were fed for 12 weeks on a diet in which $10\%$ of the fish meal protein was replaced with fermented soybean meal protein. The results indicated that this diet significantly ($p < 0.05$) influenced the expression of the superoxide dismutase, catalase, glutathione peroxidase, glutathione S-transferase, nuclear factor erythroid 2-related factor 2, tumor necrosis factor α and interleukin-6 genes, the growth performance, the serum biochemical indices, and the activity of antioxidant and immunity enzymes. ### Abstract This study aims to investigate the effects of partially replacing dietary fish meal with unfermented and/or fermented soybean meal (fermented by Bacillus cereus) on the growth performance, whole-body composition, antioxidant and immunity capacity, and related gene expression of juvenile coho salmon (Oncorhynchus kisutch). Four groups of juveniles (initial weight 159.63 ± 9.54 g) at 6 months of age, in triplicate, were fed for 12 weeks on four iso-nitrogenous (about $41\%$ dietary protein) and iso-lipidic (about $15\%$ dietary lipid) experimental diets. The main results were as follows: Compared with the control diet, the diet in which $10\%$ of the fish meal protein was replaced with fermented soybean meal protein significantly ($p < 0.05$) influenced the expression of the superoxide dismutase, catalase, glutathione peroxidase, glutathione S-transferase, nuclear factor erythroid 2-related factor 2, tumor necrosis factor α and interleukin-6 genes, the growth performance, the serum biochemical indices, and the activity of antioxidant and immunity enzymes. However, there was no significant effect ($p > 0.05$) on the survival rate (SR) or whole-body composition of the juveniles among the experimental groups. In conclusion, the diet in which $10\%$ of the fish meal protein was replaced with fermented soybean meal protein significantly increased the growth performance, antioxidant and immunity capacity, and related gene expression of the juveniles. ## 1. Introduction Coho salmon (Oncorhynchus kisutch) has become one of the most promising farmed fish in China because of its fast growth rate, high economic value, rich nutrition (including a variety of minerals), and delicious meat [1,2,3]. At present, fish meal is the main protein source in the feed required by the salmon aquaculture industry because of its high protein content, balanced amino acid composition and rich nutrition [4]. However, with the continuous growth of modern aquaculture, global fish meal supply is limited and its price continues to rise, so it can no longer meet demand [5]. Therefore, it is urgent to find a suitable protein source to replace fish meal in the aquaculture industry. Soybean meal is a plant protein with a high digestible protein content, wide availability, and low price, so it is currently recognized as the best candidate to replace fish meal in aquatic feed [6].
However, soybean meal contains unbalanced amino acids and anti-nutritional factors such as soybean antigen proteins, urease, trypsin inhibitor, soybean lectin, phytic acid, saponins, phytoestrogens, anti-vitamins and allergens [7,8,9], which can reduce palatability, inhibit the digestion and absorption of nutrients, damage tissues and organs, and seriously affect the health of aquatic animals [10,11]. Microbial fermentation is a commonly used biological method for reducing soybean meal antigens and improving palatability. Soybean meal after microbial fermentation contains fewer anti-nutritional factors; fermentation produces carbohydrates, digestive enzymes and other nutrients and degrades macromolecular proteins into small active peptides and organic acids, thereby enhancing the nutritional value of the meal and the digestion and absorption of its nutrients [12,13,14]. In addition, fermented soybean meal can provide animals with probiotics, prebiotics, flavonoids and other active substances [15,16] and increase the free amino acid content and the concentration of phenolic compounds, which contribute to its antioxidant properties [17]. At present, there are relatively few studies on the replacement of fish meal with fermented soybean meal in coho salmon. The antibacterial substances produced by *Bacillus cereus* promote growth, regulate immune function, and help treat diseases in livestock and poultry [18]. Therefore, in this study, coho salmon was selected as the research object and *Bacillus cereus* was used as the fermentation strain to explore the effects of replacing part of the fish meal with fermented soybean meal on the growth performance, muscle composition, antioxidant and immunity capacity, and related gene expression of juvenile coho salmon. The results provide a theoretical basis for the development and optimization of coho salmon compound feed and the healthy development of the artificial breeding industry. ## 2.1.
Experimental Diets Four iso-nitrogenous (about $41\%$ dietary protein) and iso-lipidic (about $15\%$ dietary lipid) experimental diets were designed based on the references [19,20,21], in which soybean meal protein could replace $10\%$ of the fish meal protein. The G0 diet contained $28\%$ fish meal protein (control group). In the three other diets (G1, G2 and G3), $10\%$ of the fish meal protein was replaced with unfermented and/or fermented soybean meal protein: in G1 with $10\%$ unfermented soybean meal protein, in G2 with $5\%$ unfermented and $5\%$ fermented soybean meal protein, and in G3 with $10\%$ fermented soybean meal protein, on a per kg dried feed basis, as shown in Table 1. All the feed materials were provided by Conkerun Ocean Technology Co., Ltd. in Shandong, China, and they were animal food-grade. The soybean meal was fermented by Bacillus cereus; the bacterial strain was collected from mangrove root soil in Maowei Sea, Qinzhou, Guangxi, China (21°81′66″ N, 108°58′46″ E). The strains and fermentation conditions were derived from preliminary experiments in our lab. The inoculation amount of *Bacillus cereus* was $10\%$ (v/m), the ratio of material to water was 1:1.4, and fermentation was carried out at 37 °C for 60 h. The fermented soybean meal was then dried for 24 h in a forced-air drying oven at 37 °C. A hammer mill was used to grind all the dry raw materials into a fine powder (80-μm mesh); the dry materials were then mixed in a roller mixer for 15 min, and water was added to form a stiff dough. Floating pellets (2.0 mm in diameter, 3.0 mm in length) were produced with a single-screw extruder and dried in an air flow at 37 °C until the water content was below 100 g/kg. The dry floating pellets were then sealed in plastic bags and stored at −20 °C until use. ## 2.2.
Experimental Fish and Culture Six hundred juvenile coho salmon at 6 months of age were obtained from a hatchery located at the Benxi rainbow trout breeding farm in Liaoning, China. Outdoor feeding experiments on the juveniles were carried out at a rainbow trout breeding farm in Nanfen District, Benxi City, Liaoning, China. After being disinfected in a 1:100,000–1:50,000 potassium permanganate solution, the juveniles were acclimatized for 14 days at a water temperature of 10–18 °C, water intake ≥ 100 L/s, surface velocity ≥ 2 cm/s, dissolved O2 ≥ 6.0 mg/L, pH 7.8–8.3 and natural light. The juveniles were fed the control diet ($28\%$ fish meal protein) three times a day at 08:00, 12:00 and 16:00 h, with feed offered at each feeding until the fish showed no further feeding behavior (apparent satiation). After the 14-day acclimatization, 390 juvenile coho salmon (initial weight 159.63 ± 9.54 g) were selected for the formal experiment, and 30 of the selected juveniles were randomly taken for initial samples. The remaining 360 were assigned randomly to 4 groups in triplicate, making a total of 12 net cages (1.0 × 1.0 × 0.8 m, L × W × H) with 30 fish in each net cage. The juveniles were cultured in the same breeding environment and fed for 12 weeks on one of the 4 diets above (Table 1), again to apparent satiation at each feeding. ## 2.3. Sampling The juvenile coho salmon were sampled at day 0 and at the end of week 12, after being starved for 24 h. All sampled fish were individually anesthetized with 40 mg/L of 3-aminobenzoic acid ethyl ester methanesulfonate (MS-222, Adamas Reagent, China), and their body weight and length were measured. At day 0, 20 juveniles were dissected for liver samples and the other 10 juveniles were used for whole-fish samples.
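The random allocation described in Section 2.2 (360 juveniles assigned to 4 diet groups in triplicate, 30 fish per cage) can be sketched as a simple randomisation; the fish IDs and the seed below are hypothetical, not from the study:

```python
import random

# Sketch of the allocation in Section 2.2: 360 juveniles assigned at random
# to 4 diet groups (G0-G3) in triplicate, i.e. 12 net cages of 30 fish each.
def assign_to_cages(fish_ids, groups=("G0", "G1", "G2", "G3"),
                    replicates=3, per_cage=30, seed=2021):
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    ids = list(fish_ids)
    assert len(ids) == len(groups) * replicates * per_cage
    rng.shuffle(ids)
    it = iter(ids)
    return {f"{g}-cage{r}": [next(it) for _ in range(per_cage)]
            for g in groups for r in range(1, replicates + 1)}

cages = assign_to_cages(range(1, 361))  # 360 fish -> 12 cages of 30
```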
At the end of 12 weeks, 9 fish per net cage were randomly taken for sampling: 3 for whole-fish samples and 6 for samples of serum, viscera mass, and liver. A sterile syringe was used to collect blood from the tail vein of the juvenile coho salmon; the blood was then transferred to a 2 mL sterile enzyme-free centrifuge tube and centrifuged at 3000× g and 4 °C for 15 min, and the supernatant was collected as serum. The liver and visceral mass were weighed and recorded separately for analysis of the growth performance. All the experimental samples were stored at −80 °C for subsequent analysis. ## 2.4.1. Growth Performance The survival rate, weight gain rate, specific growth rate, condition factor, hepatosomatic index, viscerosomatic index, feed conversion ratio, and protein efficiency ratio were calculated according to the following formulas: Survival rate (SR, $\%$) $= 100 \times \frac{\text{final number of fish}}{\text{initial number of fish}}$; Weight gain rate (WGR, $\%$) $= 100 \times \frac{\text{final body weight (g)} - \text{initial body weight (g)}}{\text{initial body weight (g)}}$; Specific growth rate (SGR, $\%$/d) $= 100 \times \frac{\ln(\text{final body weight (g)}) - \ln(\text{initial body weight (g)})}{\text{days}}$; Condition factor (CF, $\%$) $= 100 \times \frac{\text{body weight (g)}}{(\text{body length (cm)})^{3}}$; Hepatosomatic index (HSI, $\%$) $= 100 \times \frac{\text{liver weight (g)}}{\text{body weight (g)}}$; Viscerosomatic index (VSI, $\%$) $= 100 \times \frac{\text{viscera weight (g)}}{\text{body weight (g)}}$; Feed conversion ratio (FCR) $= \frac{\text{total diet weight (g)}}{\text{final body weight (g)} - \text{initial body weight (g)}}$; Protein efficiency ratio (PER, $\%$) $= 100 \times \frac{\text{final body weight (g)} - \text{initial body weight (g)}}{\text{total intake of crude protein (g)}}$. ## 2.4.2. Determination of Feed and Whole-Fish Composition The compositions of the feed and whole fish were analyzed following the standard methods of the Association of Official Analytical Chemists (AOAC, 2005) [22]. Moisture content was determined by drying samples at 105 °C to constant weight in an oven. Ash was determined in a muffle furnace at 550 °C for 24 h.
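The growth-performance formulas of Section 2.4.1 can be expressed as plain functions; a minimal Python sketch, with hypothetical example values (weights in g, length in cm, time in days):

```python
import math

# Growth-performance formulas of Section 2.4.1 as plain functions.
def wgr(ibw, fbw):                  # weight gain rate, %
    return 100 * (fbw - ibw) / ibw

def sgr(ibw, fbw, days):            # specific growth rate, %/d
    return 100 * (math.log(fbw) - math.log(ibw)) / days

def condition_factor(bw, bl):       # CF, %
    return 100 * bw / bl ** 3

def hsi(liver_w, bw):               # hepatosomatic index, %
    return 100 * liver_w / bw

def vsi(viscera_w, bw):             # viscerosomatic index, %
    return 100 * viscera_w / bw

def fcr(total_feed, ibw, fbw):      # feed conversion ratio
    return total_feed / (fbw - ibw)

def per(ibw, fbw, protein_intake):  # protein efficiency ratio, %
    return 100 * (fbw - ibw) / protein_intake

# Hypothetical fish growing from 159.6 g to 320.0 g over 84 days (12 weeks):
print(f"WGR = {wgr(159.6, 320.0):.1f}%")        # WGR = 100.5%
print(f"SGR = {sgr(159.6, 320.0, 84):.2f}%/d")  # SGR = 0.83%/d
```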
Kjeldahl method was used to determine crude protein, and the Soxhlet ether-extraction method was used to determine crude lipid. ## 2.4.3. Determination of Serum Biochemical Parameters The serum indicators were measured using kits produced by Nanjing Jiancheng Bioengineering Institute (Nanjing, China), following the specific operation steps in the kit instructions. All the instructions can be found and downloaded at http://www.njjcbio.com (accessed on 1 March 2023). The total protein (TP) content was determined by the Coomassie brilliant blue method. The glucose (GLU) content was determined by the glucose oxidase method. The total cholesterol (T-CHO) content was determined by the cholesterol oxidase (COD-PAP) method. The albumin (ALB) content and alkaline phosphatase (AKP) activity were determined by the microplate method. ## 2.4.4. Determination of Liver Antioxidant Capacity The liver indicators were measured using kits produced by Nanjing Jiancheng Bioengineering Institute (Nanjing, China), following the specific operation steps in the kit instructions. All the instructions can be found and downloaded at http://www.njjcbio.com (accessed on 1 March 2023). Superoxide dismutase (SOD) was determined by the water-soluble tetrazolium salt (WST-1) method. Catalase (CAT) was determined by the visible light method. Malondialdehyde (MDA) was determined by the thiobarbituric acid (TBA) method. The total antioxidant capacity (T-AOC) was determined by the ferric-reducing ability of plasma (FRAP) method. Glutathione peroxidase (GSH-PX), glutathione S-transferase (GST), hydroxyl radical clearance ratio (OH·-CR) and superoxide radical clearance ratio (O2·-CR) were determined by the colorimetric method. Reduced glutathione (GSH) was determined by the microplate method. ## 2.4.5. Expression of Antioxidant and Immunity Genes The method of Ding et al.
[23] was applied to determine the expression of sod, cat, gsh-px, gst, nrf2, tnf-α and il-6 mRNA in the liver of the juvenile coho salmon. Briefly, the Steady Pure Universal RNA Extraction Kit and the Evo M-MLV reverse transcription kit (Accurate Biology Biotechnology Engineering Ltd., Changsha, China) were used to extract 500 ng of total RNA from samples and reverse-transcribe it into cDNA. The polymerase chain reaction (PCR) conditions were 50 °C for 30 min, 95 °C for 5 min, and 5 °C for 5 min. The forward and reverse primers of the sod, cat, gsh-px, gst, nrf2, tnf-α and il-6 genes were designed by referencing the corresponding genomic sequences of coho salmon in the National Center for Biotechnology Information (NCBI) database and were synthesized by Sangon Biotech (Shanghai) Co., Ltd. (Shanghai, China). The primers are shown in Table 2, and β-actin was chosen as the nonregulated reference gene. Real-time quantitative polymerase chain reaction (RT-qPCR) was conducted using an RT-qPCR system (LightCycler® 96, Roche, Switzerland) and a SYBR Green Pro Taq HS qPCR kit (Accurate Biology Biotechnology Engineering Ltd., Changsha, China). The RT-qPCR conditions were as follows: initial denaturation at 95 °C for 30 s, then 40 cycles of denaturation at 95 °C for 5 s, annealing at 60 °C for 30 s and extension at 72 °C for 20 s. The 2−ΔΔCT method [24] was applied to calculate the relative expression levels of sod, cat, gsh-px, gst, nrf2, tnf-α and il-6 mRNA. ## 2.5. Statistical Analysis All the data were tested for normality and homogeneity of variance and analyzed by one-way analysis of variance (ANOVA) using IBM SPSS Statistics 25 (Chicago, IL, USA). Duncan’s test was used for multiple comparisons when differences were significant ($p < 0.05$). Data are expressed as means ± standard deviation (SD). ## 3.1.
Effect of Replacing a Portion of Fish Meal with Unfermented and/or Fermented Soybean Meal on the Growth Performance of Juvenile Coho Salmon The WGR, SGR, CF, and PER of the juveniles in G3 and the HSI, VSI, and FCR of the juveniles in G1 and G2 were significantly higher ($p < 0.05$) than those of the juveniles in G0. The HSI, VSI, and FCR of the juveniles in G3 and the WGR, SGR, CF, and PER of the juveniles in G1 and G2 were significantly lower ($p < 0.05$) than those of the juveniles in G0. However, there was no significant difference in the SR of the juveniles between the groups ($p > 0.05$), as shown in Table 3. ## 3.2. Effect of Replacing a Portion of Fish Meal with Unfermented and/or Fermented Soybean Meal on the Whole-Body Composition of Juvenile Coho Salmon No significant difference ($p > 0.05$) was found in the moisture, crude protein, crude lipid, or ash of juvenile coho salmon fed the diets in which fish meal was replaced with unfermented and/or fermented soybean meal, as shown in Table 4. ## 3.3. Effect of Replacing a Portion of Fish Meal with Unfermented and/or Fermented Soybean Meal on the Physiological and Biochemical Indices in Serum of Juvenile Coho Salmon The TP, GLU, ALB, AKP, and T-CHO of the juveniles in G3 were significantly higher ($p < 0.05$) than those of the juveniles in G0. The TP, GLU, ALB, AKP, and T-CHO of the juveniles in G1 and G2 were significantly lower ($p < 0.05$) than those of the juveniles in G0, as shown in Table 5. ## 3.4. Effect of Replacing a Portion of Fish Meal with Unfermented and/or Fermented Soybean Meal on the Antioxidant Capacity in the Liver of Juvenile Coho Salmon The SOD, CAT, GSH-PX, GSH, GST, OH·-CR, O2·-CR, and T-AOC of the juveniles in G3, and the MDA of the juveniles in G1 and G2, were significantly higher ($p < 0.05$) than those of the juveniles in G0.
The MDA of the juveniles in G3 and the SOD, CAT, GSH-PX, GSH, GST, OH·-CR, O2·-CR, and T-AOC of the juveniles in G1 and G2 were significantly lower ($p < 0.05$) than those of the juveniles in G0, as shown in Table 6. ## 3.5. Effect of Replacing a Portion of Fish Meal with Unfermented and/or Fermented Soybean Meal on the Expression of Antioxidant and Immune Genes in the Liver of Juvenile Coho Salmon The expression of the sod, cat, gsh-px, gst, and nrf2 genes in the liver of the juveniles in G3 and the expression of the il-6 and tnf-α genes in the liver of the juveniles in G1 and G2 were significantly higher ($p < 0.05$) than those of the juveniles in G0. The expression of the il-6 and tnf-α genes in the liver of the juveniles in G3 and the expression of the sod, cat, gsh-px, gst, and nrf2 genes in the liver of the juveniles in G1 and G2 were significantly lower ($p < 0.05$) than those of the juveniles in G0, as shown in Figure 1. ## 4. Discussion The growth performance of fish reflects growth and health status, and it is affected by many factors, such as fish species, growth stage, nutrient deficiency, metabolic disorders, anti-nutritional factors, and toxic and harmful substances [25]. The results of this study showed that partial replacement of fish meal with fermented soybean meal significantly increased the growth performance of juvenile coho salmon, whereas partial replacement with unfermented soybean meal significantly decreased it. Possible reasons are as follows. First, unfermented soybean meal has adverse characteristics such as poor palatability, essential amino acid imbalance, low phosphorus utilization and high levels of anti-nutritional factors, and it easily causes lipid metabolism disorders, which lead to decreased growth performance [26].
Second, fermented soybean meal can reduce and even eliminate anti-nutritional factors, and its protein is degraded into easily digestible peptides or amino acids; thus, fermented soybean meal can improve the nutritional quality of feed and its digestibility for fish [27]. Third, the active bacteria, organic acids, and vitamins in fermented soybean meal would also play a positive role in growth performance [28]. Similar studies have shown that feeding largemouth bass (Micropterus salmoides) [21] and the oriental river prawn (Macrobrachium nipponense) [29] diets in which fish meal was partially replaced with fermented soybean meal significantly improved their growth performance. Serum biochemical indices of fish are closely related to metabolism, nutrient absorption, and health status. They are important indices for evaluating physiology and pathology and are widely used to measure metabolic and health status [30,31]. TP and ALB in the blood are synthesized by the liver, and an increase in TP and ALB content indicates that the ability of the liver to synthesize protein is enhanced. AKP is one of the important indicators of fish physiological activity and disease diagnosis, and it can reflect the anti-stress ability of the organism [32]. T-CHO is an important index of the body’s lipid metabolism [33]. GLU is the body’s main energy substrate, and its content is affected by nutrition and feed intake [34]. The results of this study showed that partial replacement of fish meal with fermented soybean meal significantly increased the serum biochemical indices of juvenile coho salmon, indicating that fermented soybean meal can be used as a protein substitute for fish meal to improve the health of juvenile coho salmon.
Possible reasons are as follows. First, fermented soybean meal can improve the intestinal structure and function of fish, increase the activity of digestive enzymes, and increase the absorption and utilization of dietary proteins and lipids [35]. Second, compared with macromolecular proteins, the small peptides in fermented soybean meal are more easily absorbed by fish, which can improve the dietary protein utilization rate and consequently enhance the serum protein content of fish [12]. Third, fermented soybean meal can decrease the content of soybean saponins, increase the activity of α-glucosidase, and improve the absorption of glucose [36]. Fourth, fermented soybean meal can not only reduce the inhibitory effect of soy isoflavones on serum T-CHO levels but also stimulate the antioxidant system of the body, thereby inhibiting lipid oxidation and increasing the T-CHO content of the serum [37]. In addition, bioactive peptides produced during fermentation can act as immune stimulants to enhance AKP activity [38]. Nuclear factor erythroid 2-related factor 2 (Nrf2) is an important nuclear transcription factor involved in a variety of cellular processes, including the maintenance of intracellular redox balance, cell proliferation/differentiation, metabolism, protein homeostasis, inflammation regulation, and disease development [39,40]. Activation of the Nrf2 signaling pathway initiates the expression of multiple downstream target proteins, such as SOD, CAT, GPX, γ-glutamylcysteine synthetase (γ-GCS), glutathione reductase (GR), glutathione S-transferase (GST) and glucose-6-phosphate dehydrogenase (G-6-PDH) [41]. The expression of these genes is an important way for the body to resist oxidative stress damage [42].
The Nrf2 signaling pathway can also negatively regulate various cytokines (TNF-α, IL-1 and IL-6), chemokines, cell adhesion factors, matrix metalloproteinases, cyclooxygenase-2, inducible nitric oxide synthase, and other inflammatory mediators, playing a protective role against dysfunction caused by inflammation [43]. IL-6 and TNF-α are often used as indicators of the inflammatory response [44]. MDA content has been used by many researchers to evaluate the effect of protein replacement sources on the antioxidant capacity of fish and can serve as an important marker of endogenous oxidative damage in organisms [45]. The results of this study showed that partial replacement of fish meal with fermented soybean meal significantly increased the antioxidant capacity and the expression of the related genes in the liver and significantly decreased the expression of the il-6 and tnf-α genes in the liver of juvenile coho salmon. In contrast, partial replacement of fish meal with unfermented soybean meal significantly decreased the antioxidant capacity and the expression of the related genes in the liver and significantly increased the expression of the il-6 and tnf-α genes. Possible reasons are as follows. First, the soybean globulin and β-conglycinin in soybean meal can damage the antioxidant system of fish and cause oxidative damage [46]; previous studies have shown that soybean meal in feed may cause oxidative stress in fish such as gilthead sea bream (Sparus aurata) [47]. Second, the high concentration of soybean peptides and phenols in fermented soybean meal can up-regulate nrf2 gene expression, induce the expression of the sod, cat, gsh, and gsh-px genes, and improve the antioxidant ability of the body [48,49]. Lee et al. found that an appropriate proportion of fermented soybean meal in the diet can increase the activities of SOD, GSH-Px, and GSH in the liver [50].
Third, *Bacillus* can stimulate the production of antioxidant enzymes and antioxidants, thereby scavenging free radicals, maintaining homeostasis, improving antioxidant capacity, and activating the Nrf2 pathway [51]. Fourth, replacing $10\%$ of the fish meal protein with fermented soybean meal protein was insufficient to change the body’s ability to recognize foreign bodies and did not lead to an inflammatory reaction [52]. In addition, fermentation of soybean meal produces a distinctive fragrance, which can promote feeding in aquatic animals and increase their immunity [53]. However, the results of this study also showed that partial replacement of fish meal with unfermented and/or fermented soybean meal had no significant effect on the survival rate and whole-body composition of juvenile coho salmon. Possible reasons are as follows. First, the energy required by fish to maintain normal life activities mainly depends on the breakdown of protein and fat, and fish meal contains a complete set of essential amino acids that meet the protein requirements of most aquatic animals [54,55]. Second, the crude protein and crude lipid contents of the four diets in this study were the same and were sufficient to satisfy the daily needs of juvenile coho salmon. Third, fish body composition is affected by external conditions such as feed nutrients, food composition, aquaculture water environment and season, but in this case it was not affected by plant protein levels [56]. Similar results were obtained in pompano (Trachinotus ovatus) [53] and Florida pompano (Trachinotus carolinus) [56] fed fermented soybean meal partially replacing fish meal. However, studies have shown that a high proportion of fermented soybean meal replacing fish meal significantly increased the whole-body moisture and reduced the crude protein and crude lipid content of Japanese seabass (Lateolabrax japonicus) [57].
In giant grouper (Epinephelus lanceolatus), high levels of fermented soybean meal replacement also significantly increased whole-fish moisture and decreased crude protein and crude lipid content [58]. These inconsistent results might be related to the fermentation strains, the basal feed formula, the substitution ratio of fermented soybean meal, the species of aquatic animal, the breeding cycle, and the growth stage. ## 5. Conclusions In conclusion, the diet in which $10\%$ of the fish meal protein was replaced with fermented soybean meal protein significantly influenced the expression of the superoxide dismutase, catalase, glutathione peroxidase, glutathione S-transferase, nuclear factor erythroid 2-related factor 2, tumor necrosis factor α and interleukin-6 genes, the growth performance, the serum biochemical indices, and the activity of antioxidant and immunity enzymes of juvenile coho salmon. The results provide a theoretical basis for the development and optimization of coho salmon compound feed and the healthy development of the artificial breeding industry.
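The relative gene-expression levels underlying these conclusions were obtained with the 2−ΔΔCT method described in Section 2.4.5; a minimal Python sketch, with hypothetical Ct values and β-actin as the reference gene (as in the study):

```python
# The 2^-ddCt relative-expression calculation (Section 2.4.5), sketched with
# hypothetical Ct values; ct_ref is the reference gene (beta-actin here).
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    delta_ct_sample = ct_target - ct_ref            # normalise to reference gene
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** -delta_delta_ct                     # fold change vs. control

# Hypothetical Ct values for sod in a treated sample vs. the control group.
fold = relative_expression(ct_target=24.0, ct_ref=18.0,
                           ct_target_ctrl=25.0, ct_ref_ctrl=18.0)
print(fold)  # 2.0 -> sod expressed ~2x the control level
```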
# Association of Computed Tomography Measures of Muscle and Adipose Tissue and Progressive Changes throughout Treatment with Clinical Endpoints in Patients with Advanced Lung Cancer Treated with Immune Checkpoint Inhibitors ## Abstract ### Simple Summary The impact of sarcopenia (i.e., progressive and generalised loss of skeletal muscle mass) and obesity on survival are substantially investigated in cancer patients. However, the relationship between sarcopenia and mortality is quite unclear in patients with lung cancer treated with immunotherapy, while the prognostic value of obesity remains controversial. These issues are potentially related to the obesity paradox and lack of precise measures of body composition on survival. As a result, we aimed to explore the associations between measures of skeletal muscle mass and adiposity (i.e., intramuscular, visceral and subcutaneous adipose tissue) and changes during treatment with disease progression and overall survival in patients with advanced lung cancer receiving immunotherapy. Our results demonstrated that rather than sarcopenia, higher intramuscular and subcutaneous adipose tissue are associated with better prognosis during immunotherapy. These findings are of great importance for clinical practice and may inform specific and tailored therapies to improve immunotherapy prognosis. ### Abstract To investigate the association between skeletal muscle mass and adiposity measures with disease-free progression (DFS) and overall survival (OS) in patients with advanced lung cancer receiving immunotherapy, we retrospectively analysed 97 patients (age: 67.5 ± 10.2 years) with lung cancer who were treated with immunotherapy between March 2014 and June 2019. From computed tomography scans, we assessed the radiological measures of skeletal muscle mass, and intramuscular, subcutaneous and visceral adipose tissue at the third lumbar vertebra. 
Patients were divided into two groups based on specific or median values at baseline and changes throughout treatment. A total number of 96 patients ($99.0\%$) had disease progression (median of 11.3 months) and died (median of 15.4 months) during follow-up. Increases of $10\%$ in intramuscular adipose tissue were significantly associated with DFS (HR: 0.60, $95\%$ CI: 0.38 to 0.95) and OS (HR: 0.60, $95\%$ CI: 0.37 to 0.95), while increases of $10\%$ in subcutaneous adipose tissue were associated with DFS (HR: 0.59, $95\%$ CI: 0.36 to 0.95). These results indicate that, although muscle mass and visceral adipose tissue were not associated with DFS or OS, changes in intramuscular and subcutaneous adipose tissue can predict immunotherapy clinical outcomes in patients with advanced lung cancer. ## 1. Introduction In recent years, immune checkpoint inhibitors (ICIs) or immunotherapies, such as nivolumab, pembrolizumab and ipilimumab, have evolved rapidly in medical oncology. The utilisation of ICIs has become a key component for managing a variety of malignancies including lung cancer, resulting in an unprecedented survival advantage over standard therapies such as radiation therapy and chemotherapy. While chemotherapy acts directly on cancer cells inhibiting the cell cycle, ICIs are antibodies targeting the programmed death 1 (PD-1), programmed death-ligand (PD-L1) or cytotoxic T-lymphocyte-associated protein 4 (CTLA-4), blocking key regulatory signals that dampen immune responses in the tumour microenvironment. As a result, ICIs counteract immune suppression allowing for tumour reactive T cells to mount an antitumour response utilising the patient’s immune system to target the malignancy [1]. These therapies have shown promising effects in the treatment of lung cancer, as well as a selection of other solid tumours and haematologic malignancies [2,3,4,5]. 
Several studies have pointed out that multiple variables significantly influence overall survival under immunotherapy. Among potential factors, sarcopenia (i.e., progressive and generalised loss of skeletal muscle mass [6]) has emerged as an important prognostic factor in different groups of cancer patients [7]. However, the relationship between sarcopenia and overall survival in patients treated with immunotherapy is still unclear [8,9]. While some studies report a significant association between sarcopenia and shorter overall survival [8], others find no significant relationship [9]. For example, in a previous study of small-cell lung cancer patients receiving salvage anti-PD-1 immunotherapy ($n = 105$), patients presenting with low levels of muscle mass (i.e., sarcopenic patients) had a ~$200\%$ greater risk of all-cause mortality compared to those with higher levels of muscle mass [8]. In contrast, there was no difference in overall survival between sarcopenic and non-sarcopenic patients with solid metastatic tumours treated with ICIs ($n = 261$) [9]. Moreover, the prognostic value of obesity for survival remains controversial across malignancies [10]. Although previous studies indicated a potential association between body mass index (BMI) and overall survival in advanced cancer patients treated with immunotherapy [11,12], others have demonstrated no significant association between BMI and clinical endpoints [13]. These conflicting results, potentially related to the obesity paradox (i.e., inconsistency concerning the role of obesity on survival), preclude us from understanding the role of fat mass components (i.e., visceral adipose tissue or subcutaneous adipose tissue) on survival in this population [14,15].
For example, while visceral adipose tissue (VAT) secretes various cytokines and cytokine-like factors, which potentially enhance cancer progression [16,17], factors derived from subcutaneous adipose tissue can increase insulin sensitivity and lipid metabolism, potentially resulting in improved survival [18]. Therefore, although BMI is a much simpler and widely used tool in clinical practice, it does not reflect individual components of body weight such as fat distribution or muscle quantity and quality. As a result, this study aims to investigate the associations of skeletal muscle mass, intramuscular adipose tissue, subcutaneous adipose tissue, visceral adipose tissue and the visceral-to-subcutaneous adipose tissue index, and their changes throughout treatment, with disease progression and overall survival in patients with advanced lung cancer receiving immunotherapy. ## 2.1. Study Population Retrospective analyses of computerised tomography (CT) imaging and electronic medical record data were performed for all patients treated with immunotherapy who presented to Fiona Stanley Hospital, Western Australia between March 2014 and June 2019. A total of 124 patients with lung cancer treated with immunotherapy were identified. Patients without CT imaging data were excluded from the final cohort, resulting in a total of 97 patients included for further analyses. Demographic, pathological and survival information was obtained via electronic medical record review. The duration of follow-up was 60 months from the first presentation to the date of death for deceased patients or the date of the last documented encounter for surviving patients. Demographic and clinical data, such as sex, age, BMI, smoking habits, Eastern Cooperative Oncology Group (ECOG) performance status (PS), distant metastases, cancer type, treatment regimens, progression-free survival (PFS) and overall survival (OS), were collected from self-report and medical records.
Our study was approved by the Hospital Ethics Committee (RGS0000003289) and conducted in compliance with the Helsinki Declaration. ## 2.2. Assessment of Muscle Mass and Fat Mass Parameters CT scans were acquired at a median of 20 [interquartile range (IQR): 8 to 31] days before commencing immunotherapy treatment. CT scans of the abdomen/pelvis were performed as part of the recommended staging pathway and retrieved from the hospital imaging PACS/RIS system (version 6.7.0.6011; Agfa, Mortsel, Belgium). A single 3 mm axial slice through the middle of the L3 vertebral body was retrieved using the sagittal reformatted images with the morphologic L5/S1 junction as reference. These images were imported into SliceOmatic (version 5.0 Rev 12; TomoVision, Magog, QC, Canada) and analysed using the ABACS module (version 6 Rev-7b; Voronoi Health Analytics, Coquitlam, BC, Canada). If there was an artifact at this level, the nearest artifact-free contiguous slice above or below this level was utilised. A visual colour-coded overlay was reviewed to assess for correct segmentation; any errors were manually corrected using Edit mode, following standard anatomic boundaries. Area measurements (cm2) were obtained by auto-segmentation using the default Hounsfield unit (HU) thresholds: skeletal muscle was determined in the range of −29 to 150 HU, including the skeletal muscle compartments of the psoas, paraspinal and abdominal wall musculature. Intramuscular adipose tissue (IMAT) was determined in the range of −190 to −30 HU, visceral adipose tissue (VAT) in the range of −150 to −50 HU and subcutaneous adipose tissue (SAT) in the range of −190 to −30 HU. The visceral-to-subcutaneous adipose tissue ratio was defined as the ratio between the VAT and SAT values. Values were normalised to height squared (m2) to derive the skeletal muscle, IMAT, VAT, SAT and VAT/SAT indexes.
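The HU-threshold rules above can be made concrete in code. This is an illustrative sketch only (the function names are ours, not part of the SliceOmatic/ABACS software): because IMAT and SAT share the same HU window (−190 to −30), the anatomic compartment of each pixel, assumed here to come from a prior segmentation step, is what separates them.

```python
# Sketch of tissue classification by Hounsfield unit (HU) and anatomic
# compartment, using the thresholds stated in the text. Not the authors' code.

def classify_pixel(hu: int, region: str) -> str:
    """Label one CT pixel given its HU value and its anatomic compartment.

    region: 'muscle' (inside the muscle fascia), 'visceral' (abdominal cavity)
    or 'subcutaneous' (outside the abdominal wall), assumed to be provided by
    a prior anatomic segmentation step.
    """
    if region == "muscle":
        if -29 <= hu <= 150:
            return "skeletal_muscle"
        if -190 <= hu <= -30:
            return "IMAT"
    elif region == "visceral" and -150 <= hu <= -50:
        return "VAT"
    elif region == "subcutaneous" and -190 <= hu <= -30:
        return "SAT"
    return "other"

def index_cm2_m2(area_cm2: float, height_m: float) -> float:
    """Normalise a tissue area (cm^2) to height squared (m^2), as in the text."""
    return area_cm2 / height_m ** 2
```

In this scheme a pixel at −100 HU is IMAT, VAT or SAT depending solely on which compartment it falls in, mirroring how the shared HU windows are disambiguated.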
For further analysis, the skeletal muscle index was analysed as a categorical variable with two levels, corresponding to sarcopenia (skeletal muscle index < 43 cm2·m−2 with BMI < 25 kg·m−2, or skeletal muscle index < 53 cm2·m−2 with BMI ≥ 25 kg·m−2) and non-sarcopenia (skeletal muscle index ≥ 43 cm2·m−2 with BMI < 25 kg·m−2, or skeletal muscle index ≥ 53 cm2·m−2 with BMI ≥ 25 kg·m−2), as previously established [19]. Considering the lack of cut-off values for adiposity measures, median values based on our sample were used to categorise patients with higher and lower levels of the IMAT, VAT, SAT and VAT/SAT indexes. Relative changes (%) were calculated as $\frac{\text{index}_{\text{follow-up}}}{\text{index}_{\text{baseline}}} \times 100\%$, with a threshold of 10% utilised to categorise groups with the lowest and highest index changes throughout treatment. ## 2.3. Assessment of Outcomes The primary outcome was overall survival, defined as death as a result of any cause, while the secondary outcome was disease progression, defined as an increase in the size of the tumour by 20%. Vital status and causes of death were obtained via electronic medical record review. Follow-up time for overall mortality was calculated as the time from the CT scan to death from any cause or the end of follow-up (i.e., 60 months following the time of the first scan). ## 2.4. Statistical Analyses Analyses were performed using SPSS v.27 (IBM Corp., Armonk, NY, USA) and R (R Core Team, 2013). Differences in overall mortality between groups based on the sarcopenia, IMAT, SAT, VAT and VAT/SAT variables were assessed using the Kaplan–Meier method and the log-rank test. A paired-sample t-test was used to compare values between the first and second CT scans during immunotherapy. The hazard ratios (HRs) for the associations of the skeletal muscle, IMAT, SAT, VAT and VAT/SAT ratio indexes with overall mortality and disease progression were estimated in separate models using Cox proportional hazards regression.
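The BMI-dependent sarcopenia cut-offs and the 10% relative-change grouping described above can be sketched as follows. This is an illustrative sketch: the cut-off values are those attributed to reference [19], the function names are ours, and the ratio formula is interpreted here as a percent change relative to baseline.

```python
# Sketch of the categorisation rules described in the text; not study code.

def is_sarcopenic(smi: float, bmi: float) -> bool:
    """Skeletal muscle index (SMI, cm^2/m^2) cut-off depends on BMI (kg/m^2):
    < 43 when BMI < 25, < 53 when BMI >= 25 (cut-offs from reference [19])."""
    cutoff = 53.0 if bmi >= 25.0 else 43.0
    return smi < cutoff

def relative_change_pct(baseline: float, follow_up: float) -> float:
    """Relative change (%) of an index between baseline and follow-up."""
    return (follow_up / baseline - 1.0) * 100.0

def change_group(baseline: float, follow_up: float, threshold: float = 10.0) -> str:
    """Categorise a patient by whether the index rose by more than `threshold` %."""
    if relative_change_pct(baseline, follow_up) > threshold:
        return ">10% increase"
    return "<=10% change"
```

For example, a patient whose SAT index rises from 50 to 56 cm2·m−2 (a 12% gain) falls into the ">10% increase" group.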
Logistic regression was used to determine the impact of body composition components on the occurrence of adverse events ≥ grade 2. Odds ratios (ORs) and 95% CIs were reported. Models were adjusted for age, BMI, cancer type and stage. A p-value of ≤0.05 was considered statistically significant, and point estimates were presented with 95% confidence intervals. ## 3.1. Patient Characteristics Patient characteristics are presented in Table 1. Patients were 67.5 ± 10.2 years of age (mean ± standard deviation) with a BMI of 26.1 ± 4.9 kg·m−2. Most patients were overweight/obese (60.8%). The majority of patients had adenocarcinoma (62.9%), followed by squamous cell carcinoma (29.9%). Most patients were treated with second-line immunotherapy (75.3%). A total of 81 patients were stage IV (84.4%) and had metastatic disease present in more than two sites (22.9%), bone (17.1%), lymph node (8.6%), liver (5.7%), adrenal (2.9%) and brain (2.9%). In this cohort, the most common immunotherapy agent was nivolumab (58.8%), followed by pembrolizumab (24.7%) and atezolizumab (16.5%). A total of 96 patients (99.0%) had disease progression and died during follow-up, with a median time to disease progression of 11.3 (IQR: 4.9 to 20.4) months and a median overall survival of 15.4 (IQR: 7.2 to 24.0) months, respectively. ## 3.2. Association of Body Composition Components with Disease Progression and Overall Survival The median IMAT, SAT, VAT and VAT/SAT ratio index values were 3.85, 55.43, 41.90 and 0.74 cm2·m−2, respectively. Multivariable models indicated no significant associations of the sarcopenia, IMAT, SAT, VAT and VAT/SAT ratio indexes at baseline with 5-year disease progression (HR: 0.69–1.25, p = 0.199–0.877) or 5-year overall survival (HR: 0.69–1.34, p = 0.123–0.724) in patients with advanced lung cancer undergoing immunotherapy (Table 2).
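The survival comparisons reported here rest on the Kaplan–Meier method named in the statistical analyses; to make the estimator concrete, a minimal version can be written in a few lines. This is an illustrative sketch on toy data, not the study's analysis code (which used SPSS and R).

```python
# Minimal Kaplan-Meier estimator for right-censored data (illustration only).

def kaplan_meier(times, events):
    """Return a step curve as [(time, survival_probability)].

    times:  follow-up time for each patient
    events: 1 if the event (death/progression) occurred, 0 if censored
    """
    data = sorted(zip(times, events))
    surv = 1.0
    curve = []
    for t in sorted(set(times)):
        at_risk = sum(1 for tt, _ in data if tt >= t)      # still under observation
        deaths = sum(e for tt, e in data if tt == t)       # events exactly at t
        if deaths:
            surv *= 1.0 - deaths / at_risk                 # product-limit update
            curve.append((t, surv))
    return curve

# Toy example: four patients, one censored at t = 3.
example = kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1])
```

The censored patient at t = 3 contributes to the risk sets at earlier times without triggering a step, which is exactly what distinguishes Kaplan–Meier from a naive event fraction.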
Kaplan–Meier analyses stratifying patients according to body composition component cut-off values for 5-year disease progression and overall survival are presented in Figure 1 and Figure 2, respectively (p = 0.061–0.606). A second CT scan was performed in 88 patients, as presented in Table 3. Changes in the skeletal muscle, IMAT, SAT, VAT and VAT/SAT ratio indexes were not statistically significant following a median time of 15.4 (IQR: 7.1 to 26.5) months after the first CT scan. Although changes in the sarcopenia, VAT and VAT/SAT ratio indexes were not associated with 5-year disease progression (HR: 0.63–1.24, p = 0.064–0.484), >10% increases in the IMAT (HR: 0.60, 95% CI: 0.38 to 0.95) and SAT indexes (HR: 0.59, 95% CI: 0.36 to 0.95) were associated with improved 5-year disease progression (p = 0.028 and 0.029; Table 4). Patients with a >10% increase in the IMAT index presented a median time to disease progression of 15.9 (IQR: 8.8 to 24.6) months vs. 11.7 (IQR: 5.5 to 19.0) months in patients with a ≤10% change in the IMAT index (Kaplan–Meier log-rank, χ2 = 4.2, p = 0.042). Likewise, patients who had a >10% increase in the SAT index presented a median time to disease progression of 16.9 (IQR: 10.8 to 29.6) months vs. 10.2 (IQR: 4.7 to 18.8) months in patients who had a decrease in the SAT index (Kaplan–Meier log-rank, χ2 = 5.3, p = 0.022). Kaplan–Meier analysis of 5-year disease progression is presented in Figure 3. Regarding overall survival, a >10% increase in IMAT was associated with improved 5-year overall survival (HR: 0.60, 95% CI: 0.37 to 0.95, p = 0.031; Table 5). Patients who had an increase of 10% in IMAT presented a median overall survival of 17.8 (IQR: 9.6 to 27.9) months vs. 15.5 (IQR: 8.5 to 23.2) months in patients who had a decrease in this measure (Kaplan–Meier log-rank, χ2 = 3.4, p = 0.067). Kaplan–Meier analysis of 5-year overall survival is presented in Figure 4. ## 3.3.
Association of Body Composition Components with Immune-Related Adverse Events Thirty-six adverse events (43.4%) were observed during immunotherapy. Of these, a total of 11 grade 2 (13.3%) and 5 grade 3 (6.0%) events were observed. No associations were observed between the sarcopenia, IMAT, SAT, VAT and VAT/SAT ratio indexes and high-grade adverse events during immunotherapy (OR: 0.95–2.00, p = 0.279–0.947). ## 4. Discussion The present study reported the associations between radiological measures of muscle and adipose tissue and disease progression and overall survival in patients with advanced lung cancer receiving immunotherapy. The main findings were: (i) the muscle mass index at the time of or during immunotherapy was not associated with disease progression or overall survival; and (ii) patients with lung cancer presenting with increases of 10% in intramuscular and subcutaneous adipose tissue following treatment were at a ~40% decreased risk of disease progression and death compared to those presenting with lower levels, regardless of age, BMI, cancer type and stage. The significant association of sarcopenia with poor disease prognosis has been observed in several papers across different types of cancer [20,21,22]. Interestingly, the majority of studies reporting such findings in the field of immunotherapy were undertaken in patients with lung cancer [8,20,23,24,25,26]. As far as we know, this is one of the few studies [25] undertaken in patients with lung cancer mainly with adenocarcinoma and squamous cell carcinoma (~93% of the sample). Our study indicates that sarcopenia is not significantly associated with disease progression or overall survival in this population with advanced cancer receiving immunotherapy. As observed in our results, the presence of sarcopenia at the start of immunotherapy and a reduction of 10% in the skeletal muscle mass index were not associated with disease progression and mortality.
However, this result disagrees with previous studies undertaken in patients mainly with non-squamous lung cancer [8,26], which suggests that tumour histology may affect the interaction between sarcopenia and immunotherapy in patients with advanced lung cancer. Nevertheless, lower levels of muscle mass may still affect other important components of immunotherapy such as inflammation, cachexia and physical disability. Consequently, more research is required to elucidate the importance of sarcopenia for other important clinical measures. The investigation of obesity in immunotherapy is challenging given the confounding factors associated with the obesity paradox [15] and its role in cancer dynamics [27,28]. We observed that intramuscular and subcutaneous adipose tissue could be predictive markers of improved survival when increased throughout the treatment course. Subcutaneous adipose tissue secretes a range of factors, such as leptin, that could act to improve insulin sensitivity and lipid metabolism [17,18,29]. As a result, this could potentially increase overall survival in this group of patients and represent an important measure during cancer survivorship. However, the finding that an increase in intramuscular adipose tissue could improve survival was unexpected. While previous studies identified a significant association of intramuscular adipose tissue with shorter survival in women with non-metastatic breast cancer [30] and men with hormone-sensitive prostate cancer [31], others did not observe a significant association in metastatic breast cancer [32] or advanced non-small-cell lung cancer treated with immunotherapy [33,34]. Moreover, previous studies have demonstrated that increased intramuscular fat is related to increased frailty and sarcopenia [35] and impaired physical function [36].
In addition, others indicate that increased intramuscular fat is associated with poor survival and an increased risk of hospitalisation in older adults or critically ill patients [37,38]. Therefore, the interaction between intramuscular fat and immunotherapy is yet to be determined in this setting. Interestingly, we also observed an unexpectedly longer time to disease progression compared to other large immunotherapy randomised controlled studies [39,40,41,42]. While we observed a median time to disease progression of 11.3 months, a range of 3.0 to 5.0 months was reported in these trials [39,40,41,42]. The reasons are likely multifactorial and related to our smaller sample size and retrospective design compared to these large randomised controlled trials [39,40,41,42]. Additionally, we observed high PD-L1 expression values in our sample (median of 60%), and this may also account for the long time to disease progression, as PD-L1 expression is associated with improved survival even when using monotherapy agents in advanced non-small-cell lung cancer. Other factors, such as the mix of cancer stages (~16% stage III) and treatment lines (~25% first treatment line), differ from these previous immunotherapy trials [39,40,41,42] and may affect disease progression. Our cohort also presented more favourable histology (i.e., adenocarcinoma), and tumour burden may be different, as 40% did not present distant metastasis. These factors may play a role in disease progression. Some limitations are worthy of comment. The retrospective nature of the study and the heterogeneity of CT scans may limit our ability to extrapolate our findings to a larger scale. Future studies should undertake prospective designs to assess the influence of body composition changes on clinical endpoints, as well as reporting the time of body composition assessment.
In addition, the lack of standardisation of radiological measures of muscle and adipose tissue (i.e., cut-off values), due to variability in the underlying techniques, makes comparisons difficult and affects our ability to provide more meaningful recommendations based on our findings. Although the use of body composition is promising, critical and technical studies are required to understand the relationship of sarcopenia with clinical endpoints and to inform specific and tailored interventions in patients treated with immunotherapy. Finally, we could not estimate the impact of sarcopenic obesity in our sample. This is an emergent topic in oncology given the high risk of mortality and severe complications experienced by patients during systemic and surgical cancer treatments. Future studies are required to investigate the impact of sarcopenic obesity in lung cancer patients during immunotherapy and to identify clinical management strategies for this population. ## 5. Conclusions In conclusion, our findings indicate that changes in intramuscular and subcutaneous adipose tissue, rather than muscle mass and visceral adipose tissue, can predict immunotherapy clinical outcomes regardless of age, BMI, cancer type and stage. This result provides new insights into the assessment of body composition in patients with advanced lung cancer undergoing immunotherapy. Consequently, future research should seek to assess a larger sample of patients undergoing immunotherapy to further elucidate the influence of body composition, specifically monitoring intramuscular and subcutaneous adipose tissues.
# Comparison of Fecal Microbiota Communities between Primiparous and Multiparous Cows during Non-Pregnancy and Pregnancy ## Abstract ### Simple Summary An imbalance in the gut microbiota composition may lead to several reproductive disorders and physiological diseases during pregnancy. This study investigates the fecal microbiome composition of primiparous and multiparous cows during non-pregnancy and pregnancy to analyze the host-microbial balance at different stages. The results indicate that host-microbial interactions promote adaptation to pregnancy and will benefit the development of probiotics or fecal transplantation for treating dysbiosis and preventing disease development during pregnancy. ### Abstract Imbalances in the gut microbiota composition may lead to several reproductive disorders and diseases during pregnancy. This study investigates the fecal microbiome composition of primiparous and multiparous cows during non-pregnancy and pregnancy to analyze the host-microbial balance at different stages. Fecal samples obtained from six cows before their first pregnancy (BG), six cows during their first pregnancy (FT), six open cows with more than three lactations (DCNP), and six pregnant cows with more than three lactations (DCP) were subjected to 16S rRNA sequencing, and a differential analysis of the fecal microbiota composition was performed. The three most abundant phyla in the fecal microbiota were Firmicutes (48.68%), Bacteroidetes (34.45%), and Euryarchaeota (15.42%). There were 11 genera with more than 1.0% abundance at the genus level. Both alpha diversity and beta diversity showed significant differences among the four groups (p < 0.05). Further, first pregnancy (primiparity) was associated with a profound alteration of the fecal microbiota.
The most representative taxa included Rikenellaceae_RC9_gut_group, Prevotellaceae_UCG_003, Christensenellaceae_R_7_group, Ruminococcaceae UCG-005, Ruminococcaceae UCG-013, Ruminococcaceae UCG-014, Methanobrevibacter, and the [Eubacterium] coprostanoligenes group, which were associated with energy metabolism and inflammation. The findings indicate that host-microbial interactions promote adaptation to pregnancy and will benefit the development of probiotics or fecal transplantation for treating dysbiosis and preventing disease development during pregnancy. ## 1. Introduction Pregnancy is a remarkable and complex physiological process. In order to adapt to the growth and development of the fetus, drastic changes occur in maternal hormones, immunity, and metabolism before and after pregnancy. For mammals, progesterone (P4), estradiol (E2), follicle-stimulating hormone (FSH), luteinizing hormone (LH), and prolactin (PRL) are the main reproductive hormones used to maintain and monitor pregnancy [1]. Growth hormone, thyroid hormone, and sex hormones also change with maternal pregnancy [2]. The maternal immune system undergoes significant adaptations during pregnancy to avoid harmful immune responses against the fetus and to protect the mother and her future offspring from pathogens [3]. For example, the number of T cells during pregnancy is lower than before pregnancy [4]. More nutrients need to be stored and consumed during pregnancy to meet the nutritional demands of the mother and fetus. Maternal metabolism changes to meet the nutritional requirements of pregnancy, the most obvious change being a decrease in insulin sensitivity [5,6]. Additionally, primiparous women have more exaggerated physiological responses than multiparous women, resulting in higher weight gain and body fat gain during pregnancy [7].
There are also many differences between primiparous and multiparous cows, including productivity, reproductive ability, energy balance, and immune, metabolic, and hormonal responses [8,9]. The gut microbiota can produce a variety of nutrients, such as amino acids, fatty acids, and vitamins, which play an important role in regulating host metabolism, energy balance, and immune response [10,11,12,13]. With the changes in maternal hormones, immunity, and metabolism during pregnancy, the composition and abundance of the gut microbiota also shift. The relative abundance of 21 genera of gut microbiota showed significant differences between non-pregnant and pregnant mice fed a standard diet: 4 abundant genera (present at greater than 1%) were significantly increased and 5 rare taxa (present at lower than 0.5%) were reduced during pregnancy compared to non-pregnant mice [14]. For dairy cows, the fecal microbial communities change dramatically in bacterial abundance at different taxonomic levels among the 12 distinctly defined production stages in a modern dairy farm, especially between virgin cows and parous cows [13]. Information on host-microbial interactions during pregnancy is emerging [15]. Recent studies showed that the gut microbiota can impact the synthesis and metabolism of a variety of substances during pregnancy, regulating body weight, blood pressure, blood sugar, blood lipids, and other physiological indexes, and even leading to some pregnancy complications [16,17,18]. Parity has also been identified as one of the key determinants of the maternal microbiome during pregnancy. The difference in microbiome trajectories among parities was significant in sows, with the greatest difference between zero-parity and low-parity animals, suggesting that there are dramatic differences in the microbial trajectories of primiparous and multiparous animals [19].
Compared to multiparous sows, primiparous sows had lower gut microbiota richness and evenness during the periparturient period [20]. Primiparous cows have different uterine and rumen microbiome compositions compared to multiparous cows [21,22]. However, it is still unclear whether parity impacts the maternal cow's gut microbiome during both non-pregnancy and pregnancy. In this study, the gut microbiome composition was investigated in fecal samples from primiparous and multiparous cows during non-pregnancy and pregnancy. The study confirmed that there is an inherent shift in gut microbiota associated with pregnancy, as well as differences in gut microbiota composition between primiparous and multiparous animals. The results will help develop strategies to improve the reproductive management of cows. ## 2.1. Ethics Statement The collection of biological samples and the experimental procedures carried out in this study were approved by the Institutional Animal Care and Use Committee of the College of Animal Science and Technology, Sichuan Agricultural University, China (DKY20210306). ## 2.2. Sample Collection A total of 24 healthy Holstein cows were selected from one dairy herd under the same conditions in southwestern China, with the same feeding processes, similar body condition, and similar body weight. According to their reproductive stages, the cows were divided into four groups: cows before their first pregnancy (13 months, n = 6, BG); cows at their first pregnancy (the 4th month of pregnancy, 18 months, n = 6, FT); open cows with more than three lactations (30 days after parturition, 57 months, n = 6, DCNP); and pregnant cows with more than three lactations (the 4th month of pregnancy, 60 months, n = 6, DCP). Animals were fed a total mixed ration (TMR) formulated according to NRC (2012) with the same feed raw materials. None of the cows had received antibiotics in the last 3 months.
All 24 fecal samples were obtained once from the rectal content of each cow on the same day, transferred to separate sterilized 2 mL tubes, and stored immediately in liquid nitrogen. All samples were then transported to the laboratory and stored at −80 °C for further analysis. ## 2.3. DNA Extraction, PCR Amplification and Gene Sequencing Total genomic DNA was extracted from the fecal samples, the negative control (DNA-free water), and the positive control (16S Universal E29) using a BIOMICS DNA Microprep Kit (Zymo Research, D4301, Irvine, CA, USA) according to the manufacturer's instructions. DNA concentration and purity were tested on 0.8% agarose gels. DNA yield was detected with a Tecan Infinite 200 PRO fluorescent reader (Tecan Systems Inc., San Jose, CA, USA). Amplification of the 16S rRNA gene covering the variable regions V4–V5 was carried out using the primers 338F (5′-ACTCCTACGGGAGGCAGCAG-3′) and 915R (5′-GTGCTCCCCCGCCAATTCCT-3′) on a Thermal Cycler PCR system (GeneAmp 9700, ABI, Foster City, CA, USA). PCRs were performed in triplicate in a 25 µL mixture. The PCR products were diluted six times, quantified by electrophoresis on a 2% agarose gel, and then purified with the Zymoclean Gel Recovery Kit (Zymo Research, D4008, Irvine, CA, USA). About 100 ng of DNA were used for library preparation. The library was prepared using the TruSeq® DNA PCR-Free Sample Preparation Kit (Illumina, San Diego, CA, USA), followed by quality evaluation on a Qubit® 2.0 Fluorometer (Thermo Fisher Scientific, Waltham, MA, USA) and an Agilent Bioanalyzer 2100 system (Agilent, Santa Clara, CA, USA). The library was finally paired-end sequenced (2 × 300) on an Illumina MiSeq PE300 platform (Illumina, San Diego, CA, USA). ## 2.4. Data Analysis The raw fastq files were merged using FLASH [23]. The raw tags were analyzed using the QIIME (v1.9.0) pipeline [24]. All tags were quality filtered.
Sequences shorter than 200 nt, those with an average quality value of less than 25, and those containing two or more ambiguous bases were discarded. The clean tags were then mapped to the Gold database (http://drive5.com/uchime/uchime_download.html (accessed on 5 May 2021)) using the UCHIME algorithm, followed by removal of the chimera sequences to identify the effective tags [25]. The operational taxonomic unit (OTU) table was created at 97% similarity using the UPARSE pipeline [26]. Representative sequences from each OTU were aligned to 16S reference sequences with PyNAST [27]. Phylogenetic trees were drawn using FastTree [28]. Annotation analysis was performed using the UCLUST taxonomy and the SILVA database [29,30]. The abundance of OTUs was normalized using a standard sequence number corresponding to the sample with the fewest sequences. OTU numbers were compared using a one-way analysis of variance (one-way ANOVA), followed by the Bonferroni multiple comparisons test. Alpha diversity, including observed species, Chao1, Shannon, Simpson, coverage, and Faith's PD, was calculated to analyze the complexity of species diversity within samples. Beta diversity (weighted and unweighted UniFrac) was calculated to evaluate differences between samples in species complexity. Principal coordinate analysis (PCoA) was used to visualize differences in bacterial community composition among groups. Linear discriminant analysis coupled with effect size (LEfSe) was performed to identify differentially abundant taxa between groups. Pairwise comparisons were made using metagenomeSeq. ## 3.1. Sequencing Information In order to evaluate the effect of reproductive status on the cow fecal microbiota, the V4–V5 hypervariable regions of the 16S rRNA gene were sequenced in the microbial communities of 24 samples. A total of 705,988 raw PE reads were generated from these 24 samples (average: 29,416 ± 4914, range: 21,956–36,765).
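The quality-control rules stated above (minimum length 200 nt, mean quality value of at least 25, fewer than two ambiguous bases) can be sketched as a simple per-read predicate. This is illustrative only; the actual filtering was performed within the QIIME pipeline, and the function name is ours.

```python
# Sketch of the read-filtering rules described in the text; not the QIIME code.

def passes_filter(seq: str, quals: list, min_len: int = 200,
                  min_mean_q: float = 25.0, max_ambiguous: int = 1) -> bool:
    """Return True if a read survives quality control.

    seq:   nucleotide sequence of the tag
    quals: per-base quality scores (same length as seq)
    """
    if len(seq) < min_len:                      # too short
        return False
    if sum(quals) / len(quals) < min_mean_q:    # mean quality below threshold
        return False
    if seq.upper().count("N") > max_ambiguous:  # two or more ambiguous bases
        return False
    return True
```

A read of 250 nt with uniformly high quality passes, while the same read with two Ns is discarded.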
After quality control, 632,192 effective tags were obtained from the 24 samples (average: 26,341 ± 4408, range: 19,472–32,926), with an average of 407.67 ± 0.92 bp per tag after the merging of overlapping paired reads, quality filtering, and removal of chimeric sequences. At 97% sequence similarity, 6842 OTUs were constructed, with a mean of 1727.38 ± 405.39 OTUs per sample (range: 999–2788); the mean number of OTUs in the DCNP group was significantly lower than that of the BG and FT groups (p < 0.01) (Figure 1). ## 3.2. Microbial Ecology of the Fecal Microbiome These 6842 OTUs were taxonomically assigned to 2 kingdoms, 17 phyla, 25 classes, 38 orders, 67 families, 168 genera, and 117 species. Based on OTU numbers, the average abundance of each group at the different taxonomic levels was evaluated (Figure 2). The fecal microbial communities were dominated by bacteria (84.58%), while archaea accounted for only 15.42%. The most abundant phyla across all 24 metagenomic libraries were Firmicutes (48.68%), followed by Bacteroidetes (34.45%) and Euryarchaeota (15.42%). Other less abundant phyla were Spirochaetes (0.85%), Tenericutes (0.42%), Proteobacteria (0.07%), Actinobacteria (0.06%), Fibrobacteres (0.02%), Cyanobacteria (0.02%), and Planctomycetes (0.01%) (Figure 3). At the genus level, there were 11 genera with more than 1.0% abundance, including Ruminococcaceae UCG-005 (21.91%), Methanobrevibacter (13.28%), Rikenellaceae RC9 gut group (10.13%), [Eubacterium] coprostanoligenes group (7.10%), Prevotellaceae UCG-004 (6.47%), Alistipes (5.52%), Ruminococcaceae UCG-013 (4.89%), Prevotellaceae UCG-003 (4.61%), Ruminococcaceae UCG-014 (1.78%), Methanocorpusculum (1.42%), and Christensenellaceae R-7 group (1.12%) (Figure 4). ## 3.3.
Microbial Diversity of the Fecal Microbiome The alpha diversity indexes, including observed species, Chao1, Shannon, Simpson, coverage, and Faith's PD, were calculated for the four groups to estimate species richness and diversity (Figure 5). Compared to the BG and FT groups, the observed species, Chao1, and Faith's PD were significantly lower, and coverage was significantly higher, in the DCNP group (p < 0.05, Kruskal–Wallis test), but not in the DCP group (p > 0.05, Kruskal–Wallis test). Further, no statistically significant difference was shown among the four groups in the Shannon and Simpson indexes (p > 0.05, Kruskal–Wallis test). Based on the Jaccard and Bray–Curtis methods, principal coordinate analysis (PCoA) of beta diversity was further used to analyze compositional differences in the fecal microbiota among the four groups (Figure 6). The samples in the BG, FT, and DCP groups clustered together according to their particular groups, while the samples in the DCNP group were spread out. The samples in the BG and FT groups tended to cluster together in the PCoA results. Both Jaccard and Bray–Curtis distances showed significant differences among the four groups (ANOSIM, p < 0.01), except between the DCP and DCNP groups (ANOSIM, p > 0.05). ## 3.4. Microbial Taxonomy and Function Analysis Linear discriminant analysis effect size (LEfSe) was used to discover the differential microbiota and estimate their effect sizes. The LEfSe analysis was restricted to the successfully annotated species and detected 60 taxa significantly different in abundance among the four groups: 7 taxa were significantly more abundant in the BG group, 17 in the FT group, 8 in the DCNP group, and 28 in the DCP group (Figure 7).
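The alpha-diversity indexes compared above can be made concrete with toy implementations of three of them (illustrative only; the real analyses were run within QIIME). The Chao1 variant used here is the bias-corrected form $S_{obs} + F_1(F_1-1)/(2(F_2+1))$, where $F_1$ and $F_2$ count singleton and doubleton OTUs.

```python
# Toy implementations of three alpha-diversity indexes; not the QIIME code.
import math

def shannon(counts):
    """Shannon index: -sum(p * ln p) over taxa with nonzero counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def simpson(counts):
    """Gini-Simpson index: probability two random reads are different taxa."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

def chao1(counts):
    """Bias-corrected Chao1 richness estimate from singletons and doubletons."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)   # singleton OTUs
    f2 = sum(1 for c in counts if c == 2)   # doubleton OTUs
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))
```

For a community of two equally abundant OTUs, Shannon equals ln 2 and Simpson equals 0.5, the maxima for two taxa; Chao1 exceeds the observed richness whenever singletons suggest unseen OTUs remain.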
The most representative taxa were Rikenellaceae and Rikenellaceae_RC9_gut_group in the DCP group, Prevotellaceae and Prevotellaceae_UCG_003 in the FT group, Christensenellaceae_R_7_group in the DCNP group, and Firmicutes, Clostridia, Clostridiales, and Ruminococcaceae in the BG group. metagenomeSeq was further used to compare the abundance of OTUs between groups. The abundance of 4, 12, and 23 OTUs was significantly increased, while that of 1, 2, and 17 OTUs was significantly reduced, in the FT, DCNP, and DCP groups compared with the BG group, respectively. In these three comparisons, six common genera (>1% abundance), namely Prevotellaceae UCG-003, Ruminococcaceae UCG-013, [Eubacterium] coprostanoligenes group, Rikenellaceae RC9 gut group, Methanobrevibacter, and Ruminococcaceae UCG-005, were identified as significantly different (Figure 8). There were 16 and 21 OTUs that were significantly increased, and 2 and 19 OTUs that were significantly reduced, in the DCNP and DCP groups compared with the FT group, respectively. A total of 8 common genera, namely the Christensenellaceae R-7 group, Ruminococcaceae UCG-014, Prevotellaceae UCG-003, Ruminococcaceae UCG-013, [Eubacterium] coprostanoligenes group, Rikenellaceae RC9 gut group, Methanobrevibacter, and Ruminococcaceae UCG-005, were observed to have significant differences (Figure 8). Furthermore, the abundance of 4 OTUs decreased in the DCP group compared with the DCNP group. The relative abundance of 2 common genera, Methanobrevibacter and Prevotellaceae UCG-003, was higher in the DCNP group than in the DCP group. ## 4. Discussion The reproductive efficiency and health of cows have always been priorities. The gut microbiota composition plays an important role in reproductive performance throughout a female's lifetime.
In humans, the gut microbiome has been considered to affect every stage and level of female reproduction, including follicle and oocyte maturation in the ovary, fertilization and embryo migration, implantation, the whole pregnancy, and parturition [31,32,33,34]. The gut microbial communities can influence reproductive success in animals from mate choice to healthy pregnancy and successfully producing offspring [35,36]. Recent studies reported that the bovine vaginal and fecal microbiomes are associated with differential pregnancy outcomes [37,38]. The fecal microbiome predicted pregnancy with higher accuracy than the vaginal microbiome [38]. In this study, fecal microbiota were investigated at four different reproductive stages using 16S rRNA gene sequencing, revealing dramatic changes in fecal microbiota diversity and composition among the four groups. In this study, Firmicutes, Bacteroidetes, and Euryarchaeota were the three most dominant phyla, and Ruminococcaceae UCG-005, Methanobrevibacter, and Rikenellaceae RC9 gut group were the three most dominant genera in the cow fecal samples. These findings are consistent with several earlier studies [39]. In previous studies, Bacteroidetes (51.6–$59.74\%$) and Firmicutes (27.6–$38.74\%$) together comprised up to 81.6–$93.20\%$ of the cow fecal bacterial abundance [13,40,41]. The phylum Euryarchaeota was predominant within the Archaea and accounted for around $0.25\%$ of the cow fecal microbiota abundance [41,42]. Ruminococcaceae UCG-005, Methanobrevibacter, and Rikenellaceae RC9 gut group predominate in the Firmicutes, Euryarchaeota, and Bacteroidetes phyla, respectively. Ruminococcaceae UCG-005 and Rikenellaceae RC9 gut group usually had a relative abundance >$8\%$ of fecal microbiota in dairy cows. The genus Methanobrevibacter comprised more than $80\%$ of the phylum Euryarchaeota in cow fecal Archaea [13,43].
Age and pregnancy are two important factors contributing to the species richness and diversity of fecal microbiota. In this study, the alpha diversity indexes observed species, Chao1, coverage, and Faith’s PD were significantly different among the BG, FT, and DCNP groups. Moreover, clustering among the four groups was evident, with PCoA based on Jaccard and Bray–Curtis distances separating the BG and FT groups from the DCNP and DCP groups. These results also showed that the greatest differences in microbiome trajectories occurred between nulliparous and primiparous animals [19]. Nulliparous animals had higher gut microbial diversity than primiparous animals, and pregnancy could increase gut microbial diversity [19,20]. The effect of age is more related to calving. The increase in alpha diversity during pregnancy could be due to an increase in nutrient requirements during lactation. The first birth is the most important physiological change in a cow’s life, and pregnancy increases metabolism. To further identify important taxa that differed among groups, LEfSe and metagenomeSeq analyses were conducted. LEfSe analysis helps discover important differential taxa (biomarkers) and estimate their effect sizes. The LEfSe analysis revealed that the most differentially abundant taxa were in DCP, followed by FT, DCNP, and BG. The metagenomeSeq analyses showed that the comparisons with the most significant differences in microbial taxa were BG vs. DCP and FT vs. DCP, followed by FT vs. DCNP, BG vs. DCNP, BG vs. FT, and DCNP vs. DCP. These results suggest that parturition experience is one of the most important factors impacting the cattle gut microbiome trajectory. A previous study also reported that the greatest difference in microbiome trajectory occurred between nulliparous and low-parity sows [19]. Significant differences in vaginal and uterine microbiota have also been reported between multiparous and primiparous cows [44,45].
The most representative taxa were associated with energy metabolism and inflammation. Mice fed a high-fat diet showed increased richness of the gut microbial Rikenellaceae_RC9_gut_group; the high-fat diet also increased the risks of intestinal pathogen colonization and inflammation [46]. Supplementation with probiotics increased the relative abundance of Prevotellaceae_UCG_003, which improved the energy status of beef steers [47]. Fibrolytic enzyme supplementation increased the relative abundance of the Christensenellaceae_R_7_group, which improved the average daily gain and feed conversion ratio of lambs [48]. The Ruminococcaceae family comprises the predominant acetogens in the cattle rumen and is related to cellulose and hemicellulose degradation [49]. The carbohydrate resource and the fiber decomposition process in the diet contribute to the different abundances of Ruminococcaceae UCG-005, Ruminococcaceae UCG-013, Ruminococcaceae UCG-014, and other Ruminococcaceae in cattle feces [49,50]. Methanobrevibacter is another common inhabitant of the cattle rumen, which can reduce CO2 with H2 to form methane [51,52]. The serum cholesterol concentration tended to be lower after feeding Eubacterium coprostanoligenes to germ-free mice [53]. Thus, gut microbes are involved in changes in energy intake and immunity during cattle adaptation to pregnancy. ## 5. Conclusions In conclusion, this study investigated the differences in fecal bacterial communities between primiparous and multiparous cows during non-pregnancy and pregnancy. The results revealed that pregnancy increased the relative abundance and diversity of fecal microbiota, while aging reduced those traits. In addition, first parturition was associated with a profound alteration of the fecal microbiota.
The most representative taxa included Rikenellaceae_RC9_gut_group, Prevotellaceae_UCG_003, Christensenellaceae_R_7_group, Ruminococcaceae UCG-005, Ruminococcaceae UCG-013, Ruminococcaceae UCG-014, Methanobrevibacter, and [Eubacterium] coprostanoligenes group, which were associated with energy metabolism and inflammation. In the future, functional studies may make it possible to treat dysbiosis and prevent disease development during pregnancy using probiotics or fecal transplantation.
# Bamboo Plant Part Preference Affects the Nutrients Digestibility and Intestinal Microbiota of Geriatric Giant Pandas ## Abstract ### Simple Summary Bamboo part preference and a panda’s age have been shown to shift the gut microbiota composition of the giant panda, thus eliciting changes in its nutrient utilization capacity. The present study compared the differences in nutrient digestibility and fecal microbiota composition between adult and geriatric captive giant pandas when fed exclusively a diet comprising either bamboo shoots or leaves. Bamboo part preference exerted a significant effect on nutrient digestibility and fecal microbiota composition in both adult and aged giant pandas. Bamboo part dominated over age in shaping the nutrient digestibility and gut microbiota composition of giant pandas. ### Abstract Bamboo part preference plays a critical role in influencing the nutrient utilization and gastrointestinal microbiota composition of captive giant pandas. However, the effects of bamboo part consumption on the nutrient digestibility and gut microbiome of geriatric giant pandas remain unknown. A total of 11 adult and 11 aged captive giant pandas were provided with bamboo shoots or bamboo leaves in the respective single-bamboo-part consumption period, and the nutrient digestibility and fecal microbiota of both adult and aged giant pandas in each period were evaluated. Bamboo shoot ingestion increased the crude protein digestibility and decreased the crude fiber digestibility of both age groups. The fecal microbiome of the bamboo shoot-fed giant pandas exhibited greater alpha diversity indices and a significantly different beta diversity index compared with the bamboo leaf-fed counterparts regardless of age. Bamboo shoot feeding significantly changed the relative abundance of predominant taxa at both phylum and genus levels in adult and geriatric giant pandas.
Bamboo shoot-enriched genera were positively correlated with crude protein digestibility and negatively correlated with crude fiber digestibility. Taken together, these results suggest that bamboo part consumption dominates over age in affecting the nutrient digestibility and gut microbiota composition of giant pandas. ## 1. Introduction The giant panda (Ailuropoda melanoleuca) is a highly specialized herbivorous species of ursid that consumes bamboo as the primary and almost exclusive diet. Unlike most herbivores, the giant panda has no apparent internal gastrointestinal adaptations to its bamboo-dominated diet, and exhibits a short digestive tract with a rapid passage of digesta, which is similar to the gastrointestinal tract morphology of most carnivores [1]. The extremely high amount of bamboo consumption each day and low energy expenditure can partly explain how giant pandas persist solely on bamboo, a highly fibrous plant with low nutritional value and digestibility [2]. However, the giant panda has been shown to lack homologs of the enzymes needed for the degradation of structural carbohydrates, the key component of bamboo [3]. It has thus been believed that the utilization and extraction of nutrients from the bamboo diet largely depends on the gut microbiome of the giant panda, as the giant panda gut microbiome has been found to exhibit a high abundance of putative genes involved in carbohydrate degradation, suggesting high utilization potential of structural polysaccharides [1,4]. Both wild and captive pandas exhibit seasonal changes in bamboo part preference, with shoots consumed in spring and summer, leaves in autumn and winter, and culms in the transition period, namely late winter and early spring [5,6]. Dietary changes are an important factor influencing the composition and function of the gut microbiome [7].
Evidence has accumulated to show that the giant panda’s gut microbiota is shaped by seasonally driven shifts in bamboo part preference, as the nutrient content of different bamboo parts varies significantly, with higher cellulose, hemicellulose, and starch, as well as lower protein, in the leaves and culms than in shoots [3,8,9]. Gut microbiota has been shown to significantly affect the nutrient utilization capacity and health status of the host [10]. In captive giant pandas, the apparent digestibility of bamboo parts differed significantly, resulting in different degrees of nutrient retention used by gut microbes in the hindgut [8]. Therefore, the changes in gut microbiome elicited by different bamboo part consumption would significantly affect the nutrient digestibility of the giant pandas. Aging is an inevitable biological process in an organism that leads to an increased risk of many diseases [11]. In terms of longevity, captive giant pandas generally have a lifespan of almost 30 years, and individuals older than 20 are considered to be “geriatric” because the reproduction process of the giant panda generally ends after this age [12]. Aging has been proven to significantly shape the structure of gut microbiota and affect the immune and metabolic functions of giant pandas [13]. Likewise, impaired digestive function and higher risk of gastrointestinal disorders have been recognized in aged giant pandas [12]. The seasonal variation in bamboo part consumption has been shown to significantly affect the nutrient digestibility of captive giant pandas [6]. However, little is known about the effects of bamboo part preference on aged giant pandas, especially changes in the gut microbiome and nutrient digestibility. To address this issue, nutrient digestibility and gut microbiota composition were compared between adult and geriatric captive giant pandas fed exclusively a diet comprising either shoots or leaves. ## 2.1.
Ethics Statement All protocols for the present study that involved animal care and treatment were approved by the Institutional Animal Care and Use Committee of Chengdu Research Base of Giant Panda Breeding (No. 2020010). ## 2.2. Study Subjects and Animal Husbandry A total of 11 adult (aged 9–17 years, average age 13) and 11 geriatric (aged 20–37 years, average age 25) captive giant pandas were the subjects of the present study. All subjects were singly housed at the Chengdu Research Base of Giant Panda Breeding (CRBGPB, Chengdu, Sichuan, China), and all were considered healthy and were not under any medical treatment during the study period. The ambient temperature was maintained at 15 °C–22 °C, and the air humidity was 65–$75\%$. All giant pandas were fed according to the normal husbandry practices of the CRBGPB as described in Wang et al. [6]. Bamboo was provided to giant pandas three times each day (08:00, 14:00, and 20:00). In the present study, giant pandas were given free access to bamboo and water, and the specific bamboo part was offered according to the seasonal shifts. In CRBGPB, bamboo shoots of *Phyllostachys nidularia* Munro were consumed by pandas in autumn and bamboo leaves of *Bashania fargesii* were provided to pandas in winter. In addition to the supply of bamboo parts, dietary supplements were provided daily and of the same mass to all subjects. In this study, both adult and geriatric pandas were provided with bamboo shoots for 3 months and bamboo leaves for 3 months: bamboo shoot-fed adult (AS), bamboo leaf-fed adult (AL), bamboo shoot-fed old (OS), and bamboo leaf-fed old (OL) giant pandas. ## 2.3. Sample Collection On the last day of each period during which pandas were offered the corresponding bamboo part, fecal samples were collected from each giant panda. For each panda, spontaneously excreted fecal samples were collected within 10 min of defecation after the morning feeding.
To avoid contamination, samples were collected only after the floor was cleaned and disinfected. Furthermore, the outer layer of feces that contacted the floor was discarded, and only fecal parts that did not touch the floor were kept and stored at −80 °C pending further analysis. ## 2.4. Apparent Nutrient Digestibility Measurement During the last three days of each single-bamboo-part consumption period, the apparent nutrient digestibility of the corresponding bamboo part was determined in both adult and older giant pandas. The amount of ingested food and excreted feces of each individual giant panda was weighed. The bamboo samples that pandas consumed and fecal samples were collected twice a day, weighed, and immediately stored at 4 °C. On the next day, corresponding proportions of fecal samples were kept and mixed according to the amount of daily excreted feces. Finally, about 1 kg of bamboo leaves and 1.5 kg of the corresponding fecal samples, as well as 5 kg of bamboo shoots and the corresponding fecal samples, were kept at −80 °C for long-term storage. The bamboo and fecal samples were dried, ground, and sieved through a 0.45 mm sieve, then mixed, sampled, and stored at −20 °C. The chemical components of the bamboo and fecal samples were determined according to the AOAC analysis method [14]. An oven drying method was adopted to measure the dry matter (DM) content, the Kjeldahl method was used to determine the crude protein (CP) content, the Soxhlet extraction method was applied to evaluate the ether extract (EE) content, the continuous extraction of samples by dilute acids and bases was used to measure crude fiber (CF), and lastly, oxygen bomb calorimetry was used to analyze the gross energy (GE) concentration of bamboo and fecal samples.
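The apparent-digestibility determination just described is a simple mass balance between nutrient ingested and nutrient excreted. A minimal Python sketch of that calculation follows; the function name and example values are hypothetical, not study data.

```python
# Apparent digestibility = (nutrient ingested - nutrient excreted) / nutrient ingested,
# following the mass-balance logic of the measurement. Example values are invented.
def apparent_digestibility(daily_intake_kg, nutrient_in_feed,
                           daily_feces_kg, nutrient_in_feces):
    """Fraction of an ingested nutrient that was apparently digested."""
    ingested = daily_intake_kg * nutrient_in_feed   # kg of nutrient eaten
    excreted = daily_feces_kg * nutrient_in_feces   # kg of nutrient excreted
    return (ingested - excreted) / ingested

# e.g. 40 kg bamboo at 10% crude protein, 20 kg feces at 4% crude protein
cp_digestibility = apparent_digestibility(40, 0.10, 20, 0.04)
print(f"crude protein digestibility: {cp_digestibility:.0%}")  # prints 80%
```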
The calculation equation of apparent nutrient digestibility was as follows:

$$\text{Apparent digestibility} = \frac{\text{Daily intake} \times \text{Nutrient substance (Bamboo)} - \text{Daily feces} \times \text{Nutrient substance (Feces)}}{\text{Daily intake} \times \text{Nutrient substance (Bamboo)}}$$

## 2.5. Genomic DNA Extraction from Feces and Sequencing The genomic DNA of each fecal sample was isolated with the QIAamp Fast DNA Stool Mini Kits (Qiagen, Beijing, China) following the manufacturer’s instructions. The integrity and concentration of the obtained DNA samples were assessed visually by agarose gel electrophoresis or measured using a NanoDrop ND-1000 device. Sterilized water was used as a negative control sample and was included in the DNA isolation process, which showed no detectable PCR product. The common primers 515F and 806R were used to amplify the V4 region of the bacterial 16S rRNA gene, and the resulting PCR products were pooled and purified using Agencourt AMPure XP beads (Beckman Coulter, Brea, CA, USA) along with the MinElute PCR Purification Kit (Qiagen, Beijing, China). After pooling and purification, these amplicons were used to construct Illumina libraries with the Ovation Rapid DR Multiplex System 1-96 (NuGEN, San Carlos, CA, USA). All of the sample libraries were sequenced on the Illumina MiSeq platform with a PE250 sequencing strategy (Novogene, Beijing, China). The raw data were deposited in the NCBI BioProject database under accession number PRJNA916390. ## 2.6. Fecal Microbiota Analysis The raw Illumina data were processed with Mothur software v1.3.6 (MI, USA) [15]. The high-quality paired-end sequences, obtained by removing the primer and barcode sequences as well as low-quality reads, were assembled into tags based on their overlapping regions. The library size of each sample was randomly subsampled to the minimum sequencing depth to minimize biases caused by differences in sequencing depth between samples. USEARCH v7.0.1001 [16] was applied to cluster tags into OTUs at a $97\%$ cutoff.
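The depth-normalization step mentioned above (randomly subsampling every library to the minimum sequencing depth) can be sketched as follows. This is an illustrative rarefaction routine on invented counts, not the Mothur implementation.

```python
# Rarefy (randomly subsample without replacement) each sample's OTU counts to
# a common depth so diversity comparisons are not biased by sequencing effort.
import numpy as np

def rarefy(counts, depth, rng):
    """Subsample a vector of OTU counts to exactly `depth` reads."""
    counts = np.asarray(counts)
    reads = np.repeat(np.arange(counts.size), counts)  # one entry per read
    picked = rng.choice(reads, size=depth, replace=False)
    return np.bincount(picked, minlength=counts.size)

rng = np.random.default_rng(7)
# two hypothetical samples with unequal sequencing depth (1000 vs 1600 reads)
samples = {"s1": [500, 300, 200], "s2": [900, 50, 50, 600]}
min_depth = min(sum(c) for c in samples.values())
rarefied = {name: rarefy(c, min_depth, rng) for name, c in samples.items()}
print({name: int(v.sum()) for name, v in rarefied.items()})
```

After rarefaction every sample sums to the same read count, so per-sample richness and diversity indexes become directly comparable.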
The representative sequence of each OTU cluster was used for taxonomic classification against the Ribosomal Database Project database with RDP v2.6 [17]. The OTU abundance table and the OTU taxonomic assignment table output by Mothur were processed with R v3.4.1 [18] to calculate the alpha diversity indexes of the communities, as well as the beta diversity index and the Bray–Curtis distance [19]. The structural dissimilarity of the microbiota communities across the samples was visualized by non-metric multidimensional scaling (NMDS) analysis based on the Bray–Curtis distance matrix. ## 2.7. Statistical Analysis For the nutrient digestibility parameters, statistical analysis was performed using SAS version 9.4 (SAS Institute Inc., Cary, NC, USA). The giant panda was considered the experimental unit for all analyses ($n = 11$ per treatment), and the results are expressed as means and SEM. The main effects of bamboo part and age, and the interaction between bamboo part and age, were determined via two-way ANOVA. After transforming non-normally distributed data to approximately conform to normality in SAS, the alpha indexes [20], including observed species, Chao1, Shannon, and Simpson, as well as the relative abundances of the top 10 phyla and top 30 genera, were tested for significance with one-way ANOVA, followed by Tukey’s test to evaluate differences between treatments. Data are presented as mean ± SE. Intragroup statistical differences in beta diversity based on the Bray–Curtis distance were assessed using the one-way ANOSIM test with 10,000 permutations. Spearman’s correlations between the gut microbiota composition and the nutrient digestibility parameters were calculated with the ggcor package within R version 3.6.1 [18]. Only correlations with Spearman’s coefficient r > 0.5 and $p \leq 0.05$ were used to generate the network graph, which was visualized and manipulated in Gephi version 9.2 [21].
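The network-filtering rule just described (keep a genus–digestibility edge only when Spearman's |r| > 0.5 and p < 0.05) can be sketched with scipy. The genus abundances below are simulated, not study data, and the genus names are used only as placeholders.

```python
# Keep only genus-digestibility associations passing the thresholds used in
# the text (Spearman |r| > 0.5 and p < 0.05). All data here are simulated.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
digestibility = rng.uniform(0.5, 0.9, size=22)  # hypothetical CP digestibility
abundances = {
    "Lactococcus":     digestibility + rng.normal(0, 0.02, 22),   # tracks it
    "Streptococcus":  -digestibility + rng.normal(0, 0.02, 22),   # inverse
    "Unrelated_genus": rng.uniform(0, 1, size=22),                # noise
}
edges = []
for genus, x in abundances.items():
    r, p = spearmanr(x, digestibility)
    if abs(r) > 0.5 and p < 0.05:  # the filtering rule from the text
        edges.append((genus, round(float(r), 2)))
print(edges)
```

The surviving `edges` list is exactly what a tool like Gephi would receive as the edge table of the correlation network.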
The differences were considered statistically significant when the p values were less than 0.05. ## 3.1. Bamboo Part and Age Affect Apparent Nutrient Digestibility of Giant Pandas A significant effect of age ($F = 4.86$, df = 1, $p = 0.04$) on dietary gross energy utilization efficiency was observed, showing that aged giant pandas had a weaker energy extraction capacity from their diet than their younger counterparts (Table 1). There was a significant effect of bamboo part ($F = 203.23$, df = 1, $p \leq 0.001$) on crude protein digestibility, indicating that bamboo shoot ingestion increased the crude protein digestibility of both adult and aged giant pandas (Table 1). There was a significant effect of bamboo part ($F = 13.65$, df = 1, $p = 0.001$) and age ($F = 11.44$, df = 1, $p = 0.002$), as well as a significant bamboo part × age interaction ($p \leq 0.05$), for ether extract digestibility (Table 1). This demonstrates that bamboo shoot feeding increased the ether extract digestibility of aged rather than adult giant pandas when compared to bamboo leaf ingestion. The results indicated that bamboo shoot-fed giant pandas had lower crude fiber digestibility than bamboo leaf-fed counterparts ($F = 16.06$, df = 1, $p \leq 0.001$, Table 1). ## 3.2. Bamboo Part and Age Affect Fecal Microbial Profiles of Giant Pandas After the pre-processing of raw reads, high-quality tags were generated from all samples, ranging from 57,136 to 91,531, and were subsampled to 57,136 to avoid bias induced by differences in sequencing depth between samples. A total of 3,728 OTUs were obtained by clustering these tags at a $97\%$ similarity cutoff.
The fecal microbiome of the bamboo shoot-fed giant pandas exhibited greater observed species ($F = 4.65$, df = 3, $p = 0.01$), Chao1 ($F = 56.08$, df = 3, $p \leq 0.001$), Shannon ($F = 62.11$, df = 3, $p \leq 0.001$), and Simpson index ($F = 5.01$, df = 3, $p = 0.005$) values than the bamboo leaf-fed counterparts regardless of age (Figure 1). The inter-group Bray–Curtis distance was significantly higher than the intra-group distance when giant pandas were fed different bamboo parts, independent of age ($F = 25.49$, df = 5, $p \leq 0.001$); otherwise, there was no difference between inter-group and intra-group Bray–Curtis distances (Figure 2A). The NMDS-based map also showed that the fecal microbiomes of giant pandas could be sorted into two clusters by bamboo part consumption rather than age (Figure 2B), indicating the dominant role of bamboo part consumption in shaping the fecal microbiome of both adult and old giant pandas. The predominant phyla in the feces of AS, AL, OS, and OL pandas were Firmicutes and Proteobacteria (Figure 3A, Table S1). Bamboo shoot feeding was found to decrease the relative abundance of Firmicutes and increase the relative abundance of Proteobacteria in adult rather than old giant pandas compared to bamboo leaf consumption (Figure 3B). Additionally, bamboo shoot feeding increased the relative abundance of Acidobacteriota, Actinobacteria, and Chloroflexi, and decreased the relative abundance of Bacteroidetes, in both adult and old giant pandas compared to bamboo leaf feeding (Figure 3C). At the genus level, Escherichia-Shigella and Clostridium_sensu_stricto_1 were the two most abundant bacteria in the feces of all four groups (Figure 4A, Table S2).
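The inter- versus intra-group comparison of Bray–Curtis distances reported above can be illustrated as follows. The abundance vectors are synthetic, standing in for a shoot-fed versus leaf-fed contrast; none of these numbers come from the study.

```python
# Bray-Curtis dissimilarity, plus mean intra- vs inter-group distances for two
# synthetic groups mimicking a shoot-fed vs leaf-fed community contrast.
import itertools
import numpy as np

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors (0 = identical)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.abs(a - b).sum() / (a + b).sum())

shoot = [[60, 20, 10, 10], [55, 25, 10, 10], [58, 22, 12, 8]]   # invented
leaf  = [[10, 15, 45, 30], [12, 18, 40, 30], [8, 20, 42, 30]]   # invented

intra = [bray_curtis(a, b)
         for group in (shoot, leaf)
         for a, b in itertools.combinations(group, 2)]
inter = [bray_curtis(a, b) for a in shoot for b in leaf]
print(f"mean intra = {np.mean(intra):.3f}, mean inter = {np.mean(inter):.3f}")
```

A clearly larger mean inter-group distance than intra-group distance is the pattern the text describes when pandas are fed different bamboo parts.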
The relative abundances of Cellulosilyticum, Citrobacter, Enterococcus, Lactococcus, Pantoea, Ralstonia, Raoultella, Acinetobacter, Bradyrhizobium, Leuconostoc, Massilia, and Providencia were higher in the feces of bamboo shoot-fed giant pandas than in the bamboo leaf-fed group regardless of age (Figure 4B,C). Bamboo shoot intake was found to decrease the relative abundance of Streptococcus, Lachnospiraceae_NK4A136_group, and Terrisporobacter in the feces of both adult and old giant pandas compared to bamboo leaf consumption (Figure 4B,C). Bamboo shoot feeding increased the relative abundance of Helicobacter and decreased the relative abundance of Clostridium_sensu_stricto_1 in the feces of adult giant pandas rather than the old group (Figure 4B,C). Compared to bamboo leaf consumption, a decreased abundance of Escherichia-Shigella and increased abundances of Turicibacter, Hafnia-Obesumbacterium, and Weissella were observed in bamboo shoot-fed old giant pandas rather than the adult group (Figure 4B,C). ## 3.3. The Correlation between Fecal Microbiota and Nutrient Digestibility in Giant Pandas The genera Streptococcus and Lachnospiraceae_NK4A136_group were significantly positively correlated with crude fiber digestibility, whereas the genera Lactococcus, Turicibacter, Raoultella, Citrobacter, Enterococcus, Pantoea, Cellulosilyticum, Weissella, Providencia, and Hafnia-Obesumbacterium were significantly negatively correlated with crude fiber digestibility ($p \leq 0.05$, Figure 5). The genera Streptococcus, Terrisporobacter, and Lachnospiraceae_NK4A136_group were significantly negatively correlated with crude protein digestibility, whereas the genera Lactococcus, Turicibacter, Raoultella, Citrobacter, Enterococcus, Ralstonia, Pantoea, Cellulosilyticum, Weissella, Providencia, Helicobacter, Hafnia-Obesumbacterium, Massilia, Bradyrhizobium, Leuconostoc, and Acinetobacter were all significantly positively correlated with crude protein digestibility ($p \leq 0.05$, Figure 5).
The genus Providencia was significantly positively correlated with ether extract digestibility ($p \leq 0.05$, Figure 5). ## 4. Discussion Despite exhibiting a carnivore’s characteristically simple gastrointestinal tract, giant pandas acquire the majority of their required nutrients from bamboo. Because the giant panda genome encodes only a limited capacity for digesting plant cellulose, it has been suggested that the gut microbiome may play a vital role in the digestion of the highly fibrous bamboo diet [22]. Seasonal dietary shifts in bamboo part selection have been observed in both wild and captive giant pandas and have been shown to extensively shape the host microbiome [5]. Bamboo part preference during different seasons has been shown to significantly influence the nutrient digestibility of adult captive giant pandas, which is associated with changes in the gut microbiota composition [6]. Owing to improvements in husbandry and veterinary care, the number of geriatric pandas in zoological institutions has increased in recent years. The aging process in giant pandas elicits a significant change in the gut microbiome, indicating that geriatric pandas exhibit a different gut microbiota composition than younger pandas [12]. While studies in humans and other animals have shown that there may be an interaction between diet and aging in regulating the host phenotype and shaping the gut microbiota composition [23,24], such information for geriatric and adult pandas fed different bamboo parts has remained unknown. Unlike studies in other animals showing similar nutrient digestibility between adult and senior individuals [25,26], lower energy digestibility was found in aged giant pandas compared to adults in the present study, indicating a declining capacity to extract energy from food in aging giant pandas. Giant pandas feed almost exclusively on bamboo, whose different plant parts exhibit significantly different nutrient compositions [4]. Wang et al.
[6] showed that the bamboo part exerted a significant effect on nutrient digestibility in giant pandas. Bamboo shoot consumption has been shown to increase the crude protein digestibility and decrease the crude fiber digestibility of giant pandas [6]. Consistently, higher crude protein digestibility and lower crude fiber digestibility were observed in bamboo shoot-fed adult and geriatric giant pandas compared to those fed bamboo leaves in the present study, which might be attributed to the inhibition of crude protein utilization induced by the higher level of fiber in bamboo leaves [27]. In rodent models, the aging process was found to decrease lipid absorption by reducing pancreatic lipase activity [28]. In this study, bamboo shoot consumption increased the ether extract digestibility in aged giant pandas rather than in adults compared to bamboo leaf feeding. This finding might be related to the lower lipase activity in the small intestine of senior giant pandas and the higher ether extract content of bamboo leaves. Compared with adults, the ether extract in bamboo leaves was too high for aged giant pandas to digest fully, resulting in lower ether extract digestibility in senior pandas fed bamboo leaves than in those fed bamboo shoots [6]. Accumulated evidence has demonstrated the possible role of the gut microbiota in the regulation of nutrient harvest in humans and monogastric animals [29,30]. More typically, as the giant panda lacks the enzymes for digesting bamboo, it has been suggested that the giant panda has no alternative but to rely on symbiotic gut microbes to extract nutrients from its highly fibrous bamboo diet [31]. A previous study contended that dietary shifts induced changes in nutrient digestibility in captive giant pandas that were associated with alterations in the microbiota composition [6].
Both bamboo plant part and age have been shown to play a critical role in shaping the gut microbiota profile of captive giant pandas [7,8,12]; however, the interaction between bamboo plant part and age on intestinal microbiota composition, as well as the relationship between interaction-induced gut microbiota shifts and the nutrient digestibility of captive giant pandas, remains unknown. Consistent with a previous study showing a more diverse gut microbiome in bamboo shoot-fed giant pandas than in their counterparts [8], we found that bamboo shoot feeding increased the observed species, Chao1, Shannon, and Simpson indexes in both adult and old giant pandas. This indicates a more abundant and diverse microbiome in bamboo shoot-fed giant pandas. Research has shown that elderly pandas exhibit lower bacterial species richness and diversity than younger individuals [12,22]. However, in this study, no main effect of age on the alpha diversity indices of the giant panda microbiome was observed, which is inconsistent with findings in rodents in which the microbial composition was generally affected by age rather than diet [32]. This indicates the predominant role of dietary shifts rather than age in shaping the gut microbiota of giant pandas. The dissimilarity distance analysis in the present study also confirmed that the fecal microbiota of giant pandas could be sorted into two clusters by bamboo part independent of age. It has been demonstrated that the phyla Firmicutes and Proteobacteria are the most predominant bacteria in the fecal microbiome of giant pandas [3,4]. In the present study, bamboo shoot feeding decreased the abundance of Firmicutes and increased the abundance of Proteobacteria in the adult group rather than the geriatric group compared to bamboo leaf feeding. This contradicts the previous finding that the relative abundance of Proteobacteria was highest in bamboo leaf-fed giant pandas [8].
However, in vivo studies in rodents revealed that bamboo shoot-derived components promoted the colonization of bacteria belonging to Proteobacteria and decreased the abundance of Firmicutes bacteria in the gut [33,34]. The contradictory results might stem from the different study subjects or the use of different bamboo species. Previous studies in monogastric animals showed that the relative abundance of Acidobacteriota was positively correlated with the intake of dietary protein, and the relative abundance of Bacteroidetes was negatively correlated with the dietary protein level [35,36]. In the present study, a higher abundance of Acidobacteriota and a lower abundance of Bacteroidetes were observed in bamboo shoot-fed giant pandas regardless of age, which might be attributed to the higher amount of protein in bamboo shoots than in bamboo leaves [6]. Consistent with previous findings [4], the genera Escherichia-Shigella and Clostridium_sensu_stricto_1 were predominantly present in the fecal microbiome of giant pandas in this study. Bamboo shoot consumption has been shown to decrease the abundance of Escherichia-Shigella and increase the abundance of Weissella in the feces of giant pandas [8]. Our study further revealed that the bamboo shoot feeding-induced changes in Escherichia-Shigella and Weissella abundances were only observed in aged giant pandas. In addition, a decreased abundance of Clostridium_sensu_stricto_1 was observed in bamboo shoot-fed adult rather than geriatric giant pandas compared to the bamboo leaf group. This finding was consistent with a previous study showing a higher abundance of Clostridium_sensu_stricto_1 in the bamboo leaf consumption stage versus the bamboo shoot consumption stage [3]. In contrast, another study found that the genus Clostridium_sensu_stricto was not significantly enriched in the bamboo leaf stage and showed low sensitivity to the host’s seasonal dietary changes [1].
These contradictory results regarding the effects of bamboo part consumption on predominant genera abundance in giant pandas further suggest that the distribution of bacteria at the genus level might depend on the interaction between dietary shifts and host age. Seasonal variation in bamboo part selection has been shown to shape the genus-level distribution of bacteria in giant pandas [1,3]. The abundances of the genera Cellulosilyticum, Lactococcus, and Streptococcus were significantly affected by the consumption of different bamboo parts [8]. Consistently, in this study, compared with bamboo leaf ingestion, bamboo shoot feeding significantly increased the abundance of Cellulosilyticum, Lactococcus, and other genera and decreased the abundance of Streptococcus in the feces of both adult and aged giant pandas. In monogastric animals, shifts in gut microbiota composition were found to closely correlate with nutrient digestibility [37]. The genus Streptococcus was positively related to crude fiber digestibility in pigs [38]. In this study, the genera Streptococcus and Lachnospiraceae_NK4A136_group were positively correlated with crude fiber digestibility in giant pandas, indicating the critical role of these two genera in the utilization of the crude fiber of bamboo. Consumption of high-protein diets and ingredients has been shown to increase the abundance of the genera Turicibacter and Lactococcus in rodents [39,40]. In the present study, the genera Turicibacter, Lactococcus, and other genera were positively correlated with the crude protein digestibility of giant pandas, which indicates that these bacteria may be important for protein utilization from the bamboo parts. Taken together, the gut microbiota composition of giant pandas was mainly shaped by bamboo part consumption rather than age. ## 5. Conclusions In conclusion, bamboo shoot feeding increased the crude protein digestibility and decreased the crude fiber digestibility of giant pandas regardless of age. Bamboo part consumption dominated over age in shaping the gut microbiota composition of giant pandas. The shifts in taxa distribution at the genus level might be responsible for the bamboo part-induced alterations in nutrient extraction.
# Systematic Identification and Comparison of the Expressed Profiles of Exosomal MiRNAs in Pigs Infected with NADC30-like PRRSV Strain ## Abstract ### Simple Summary Exosomes play a unique role in virus infection, antigen presentation, and suppression/promotion of body immunity. Porcine reproductive and respiratory syndrome virus (PRRSV) is one of the most damaging pathogens in the pig industry. Here, we used the PRRSV NADC30-like CHsx1401 strain to artificially infect 42-day-old pigs, isolated serum exosomes, identified 33 significantly differentially expressed (DE) exosomal miRNAs between the infection and control groups, and screened 18 DE miRNAs associated with PRRSV infection and immunity as potential functional molecules involved in the regulation of PRRSV infection by exosomes. ### Abstract Exosomes are biological vesicles secreted and released by cells that act as mediators of intercellular communication and play a unique role in virus infection, antigen presentation, and suppression/promotion of body immunity. Porcine reproductive and respiratory syndrome virus (PRRSV) is one of the most damaging pathogens in the pig industry and can cause reproductive disorders in sows, respiratory diseases in pigs, reduced growth performance, and other diseases leading to pig mortality. In this study, we used the PRRSV NADC30-like CHsx1401 strain to artificially infect 42-day-old pigs and isolate serum exosomes. Based on high-throughput sequencing technology, 305 miRNAs were identified in serum exosomes before and after infection, among which 33 miRNAs were significantly differentially expressed between groups (13 relatively upregulated and 20 relatively downregulated).
Sequence conservation analysis of the CHsx1401 genome identified 8 conserved regions, of which a total of 16 differentially expressed (DE) miRNAs were predicted to bind to the conserved region closest to the 3′ UTR of the CHsx1401 genome, including 5 DE miRNAs capable of binding to the CHsx1401 3′ UTR (ssc-miR-34c, ssc-miR-375, ssc-miR-378, ssc-miR-486, ssc-miR-6529). Further analysis revealed that the target genes of differentially expressed miRNAs were widely involved in exosomal function-related and innate immunity-related signaling pathways, and 18 DE miRNAs (ssc-miR-4331-3p, ssc-miR-744, ssc-miR-320, ssc-miR-10b, ssc-miR-124a, ssc-miR-128, etc.) associated with PRRSV infection and immunity were screened as potential functional molecules involved in the regulation of PRRSV virus infection by exosomes. ## 1. Introduction Porcine reproductive and respiratory syndrome virus (PRRSV) is a single-stranded positive-sense RNA virus with an envelope structure belonging to the order Nidovirales, family Arteriviridae, genus Betaarterivirus [1,2]. It is spherical or ellipsoidal with a diameter of 50–65 nm under cryo-electron microscopy [3,4]. The PRRSV genome is about 15 kb in length with a 5′ cap and a 3′ poly(A) tail and contains at least 10 open reading frames (ORFs) flanked by untranslated regions (UTRs) at both the 5′ and 3′ termini [5,6]; the genome is packaged by the nucleocapsid protein and surrounded by a lipid bilayer to form virus particles. Exosomes are vesicles bounded by a single membrane and have the same topological structure as cells [7]. They appear “cup-shaped” or “disc-shaped” under an electron microscope [8,9]. Exosomes can persist in the circulatory system for a long time, and substances in exosomes can be absorbed by adjacent cells or distant recipient cells, regulating the recipient cells and participating in the exchange of genetic material between cells [10,11].
They are mainly composed of membrane surface substances and carried contents, including cell surface receptors, membrane proteins, soluble proteins, lipids, RNA (mRNA, miRNA, lncRNA, viral RNA, etc.), genomic DNA, and mitochondrial DNA [12,13,14]. MicroRNAs (miRNAs) are a class of 18–25 nucleotide (nt), evolutionarily conserved, endogenous non-coding single-stranded small RNAs, which inhibit translation by inducing the degradation of the target mRNA or by binding to its 3′ UTR, leading to post-transcriptional gene silencing and thereby regulating gene expression at the post-transcriptional level [15,16,17]. It is estimated that miRNAs regulate more than $60\%$ of mammalian genes post-transcriptionally [18,19]. MiRNAs play an important role in intercellular communication and can also serve as potential functional molecules in disease and in virus infection, transmission, and defense [20]. A growing number of studies have shown that miRNAs can be present in body fluids, such as saliva, urine, breast milk, and blood, and act through the body’s fluid circulatory system [21,22]. Exosomal miRNAs are considered to be endogenous regulators of gene expression and metabolism and can indicate various pathological conditions [23,24]. Over the past two decades, it has been shown that miRNAs have crucial roles in the regulation of immune cell development, innate immune responses, and acquired immune responses. Other miRNAs are reported to impair PRRSV infection in the following ways: by directly targeting the PRRSV genome or PRRSV receptors, or by regulating the host’s innate immune response. The miR-26 family can significantly impair virus replication, and miR-26a can inhibit the replication of type 1 and type 2 PRRSV strains in porcine alveolar macrophages (PAMs) by regulating the type I interferon (IFN) pathway, doing so more efficiently than miR-26b [25,26].
miR-30c and miR-125b have been identified as modulators of the host innate immune response, targeting the type I IFN pathway and the NF-κB pathway, respectively [27,28,29]. MiR-23, miR-378, and miR-505 are antiviral host factors targeting PRRSV and have conserved target sites in type 2 PRRSV strains [30]. In addition, host miR-506 has been identified to inhibit PRRSV replication by directly targeting the PRRSV receptor CD151 in MARC-145 cells [31]. miR-181 can also indirectly inhibit PRRSV replication by downregulating the PRRSV receptor CD163 in blood monocytes and PAMs [32]. miRNAs can also promote PRRSV replication by interfering with basic cell physiology. During PRRSV infection, miR-24-3p and miR-22 directly target the 3′ UTR of heme oxygenase-1 (HO-1), a heat shock protein (also known as HSP32), allowing the virus to escape HO-1-mediated inhibition of PRRSV [33,34]. Pigs are known to be highly susceptible to PRRSV and poorly able to defend themselves against the entry of this pathogen into the organism [35]. In the present study, the innate and acquired immunity of pigs infected with this virus were studied at the molecular level using a strain prevalent in the field. A serum exosome isolation kit, transmission electron microscopy (TEM), nanoparticle tracking analysis (NTA), and Western blot (WB) were used to isolate and identify serum exosomes before and after infection with PRRSV. Small RNA sequencing was then performed, differentially expressed miRNAs were identified and analyzed using bioinformatics methods to obtain a set of PRRSV-associated serum exosomal miRNAs, and the results were verified by quantitative real-time PCR (qRT-PCR). ## 2.1. Animal Experiments Six PRRSV antigen- and antibody-double-negative, healthy, 42-day-old Large White pigs were placed in a clean pig-rearing system for isolation, healthcare, and environmental adaptation. All pigs had free access to feed and water.
When they were accustomed to the conditions in the isolator, the pigs were nasally inoculated with 2 mL of 10⁵ TCID50/mL PRRSV NADC30-like CHsx1401, as described previously [36,37]. Blood was collected from the anterior vena cava for serum isolation before (control group, n = 6) and 7 days after (treatment group, n = 6) virus inoculation. Cellular debris in the serum was removed by centrifugation at 3000× g for 15 min. All animal experiments in our study were approved by the Animal Ethics Committee of the Institute of Animal Science, Chinese Academy of Agricultural Sciences (CAAS) (Beijing, China), IAS2022-130. ## 2.2. Isolation and Purification of Serum Exosomes Exosome isolation and purification were carried out using the exoEasy Maxi kit (QIAGEN, Hilden, Germany, cat. no. 76064) according to the manufacturer’s protocol. ## 2.3. Transmission Electron Microscopy (TEM) Extracted exosome suspensions were spotted onto formvar carbon-coated copper grids, and the exosomes were rinsed with PBS and subjected to standard uranyl acetate staining for 3 min at room temperature. After drying for several minutes at room temperature, the grid was visualized and photographed at 100 kV using a transmission electron microscope (HT-7700, Hitachi High-Tech, Tokyo, Japan). ## 2.4. Nanoparticle Tracking Analysis (NTA) Extracted exosomes were diluted from 10 μL to 30 μL with 1× PBS. The concentration and size of the serum exosomes were then analyzed with an N30E flow nano-analyzer following the manufacturer’s instructions (NanoFCM, Xiamen, China). ## 2.5. Western Blot The extracted exosome samples were added to RIPA lysis buffer mixed with protease inhibitor (Invitrogen, Waltham, MA, USA) and phenylmethylsulfonyl fluoride (PMSF) to extract the exosomal protein, and were lysed on ice for 30 min. Then, according to the instructions of the Bradford kit, we quantified the concentration of serum exosome protein.
Exosome proteins underwent thermal denaturation. Equal amounts of protein were separated on a $12\%$ SDS-PAGE gel and then transferred to a polyvinylidene fluoride (PVDF) membrane (Millipore, Burlington, MA, USA). The membrane was blocked for 1 h at room temperature in TBST containing $5\%$ skimmed milk powder. We incubated the membrane in the diluted primary antibody (anti-CD9 antibody, Abcam, Boston, MA, USA, #ab92726; anti-CD81 antibody, Abcam, Boston, MA, USA, #ab109201) overnight at 4 °C and recovered the primary antibody. We then incubated the membrane in the diluted secondary antibody at room temperature for 1 h and recovered the secondary antibody. The membrane was washed with PBST, laid on plastic wrap, covered with an equal-volume mixture of ECL A/B detection reagents, and placed in a chemiluminescence imager. ## 2.6. Exosomal Small RNA Sequencing and Data Analyses Total RNA was extracted from the exosomes with Trizol according to the manufacturer’s instructions. We then measured the RNA concentration and optical density (OD) values and assessed RNA degradation and purity by $1\%$ agarose gel electrophoresis. Meanwhile, an Agilent Bioanalyzer 2100 was used to assess RNA integrity. Quality-checked exosomal total RNA was used to prepare a small RNA cDNA library with the NEBNext Multiplex Small RNA Library Prep Set for Illumina® (Illumina, San Diego, CA, USA) according to the manufacturer’s instructions, and the library was sequenced to produce 50 nt single-end reads on the Illumina NovaSeq 6000 platform. All procedures for small RNA library preparation were carried out by Novogene (Beijing, China). The quality-controlled data were aligned to the porcine reference genome (*Sus scrofa* 11.1) using Bowtie. Known miRNAs were identified against the miRBase (v22.0) database [38] (https://www.mirbase.org, accessed on 14 January 2022), and miRDeep2 (v0.0.5) [39] and miREvo (v1.1) [40] were used to predict novel miRNAs.
At the same time, differential expression analysis of miRNAs was performed with DESeq (v1.24.0) [41], requiring |fold change| > 1.6 and $p \leq 0.05$. Alignment was performed using MEGA (v11) [42], followed by single-base scoring using PHAST (v1.6.9) [43] and evaluation of the most conserved regions across 10 viral genomes, including WUH3 (GenBank accession no. HM853973), VR2332 (GenBank accession no. U87392), JXA1 (GenBank accession no. EF112445), CH-1a (GenBank accession no. AY032626), NADC30 (GenBank accession no. HN654459), HUN4 (GenBank accession no. EF635006), HLJZD22-1812 (GenBank accession no. MN648450), SC/DJY (GenBank accession no. MT075480), and Lelystad (GenBank accession no. M96262.2). RNAhybrid (v2.0) [44] was used to predict the binding of the identified miRNA sequences to the 3′ UTR of the CHsx1401 virus genome. miRanda (v3.3a) and RNAhybrid were used for target gene prediction. The clusterProfiler [45] R package was used for GO (Gene Ontology) functional enrichment analysis and KEGG (Kyoto Encyclopedia of Genes and Genomes) pathway enrichment analysis of the target genes. ## 2.7. Validation of miRNA Expression by RT-qPCR Total RNA was isolated from serum exosomes using Trizol (Invitrogen, Shanghai, China) according to the manufacturer’s protocol. The isolated RNA was verified by RT-qPCR on samples (n = 6 per group). cDNA was synthesized according to the instructions of the miRNA 1st Strand cDNA Synthesis (by stem-loop) kit (Vazyme, Nanjing, China), and fluorescence quantification was performed on an ABI 7500 instrument according to the instructions of the miRNA Universal SYBR qPCR Master Mix (Vazyme, Nanjing, China). The thermal cycling parameters were as follows: Stage 1: 95 °C for 30 s; Stage 2: 40 cycles of 95 °C for 5 s and 60 °C for 34 s; Stage 3: 95 °C for 15 s, 60 °C for 1 min, and 95 °C for 15 s. Primer sequences for the miRNAs and for U6, which was used as the reference gene [46], are listed in Supplementary Table S1.
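DESeq itself runs in R, but the screening rule stated above (|fold change| > 1.6 and p ≤ 0.05) reduces to a simple per-miRNA decision. A sketch in Python with hypothetical values, interpreting the cutoff symmetrically on the linear fold-change scale:

```python
def classify(fold_change, p_value, fc_cut=1.6, p_cut=0.05):
    """Label a miRNA as up/down/not significant under the stated thresholds.
    `fold_change` is treatment/control on the linear scale."""
    if p_value > p_cut:
        return "ns"
    if fold_change > fc_cut:
        return "up"
    if fold_change < 1 / fc_cut:  # symmetric cutoff: < 1/1.6 counts as "down"
        return "down"
    return "ns"

# Hypothetical (miRNA, fold change, p-value) rows, not the study's data
rows = [
    ("miR-A", 2.10, 0.001),  # large change, significant -> up
    ("miR-B", 0.40, 0.020),  # strong decrease, significant -> down
    ("miR-C", 1.30, 0.004),  # significant but below the fold-change cutoff
    ("miR-D", 3.50, 0.200),  # large change but not significant
]
calls = {name: classify(fc, p) for name, fc, p in rows}
print(calls)  # {'miR-A': 'up', 'miR-B': 'down', 'miR-C': 'ns', 'miR-D': 'ns'}
```

Both conditions must hold, which is why miR-C and miR-D above are not called despite each passing one criterion.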
All qRT-PCR verifications were performed using three biological replicates, with three technical replicates per sample. The relative abundance of transcripts was calculated by the $2^{-\Delta\Delta Ct}$ method, and SPSS (v22.0) and GraphPad Prism (v8.0) were used for data analysis and plotting, respectively. Differences with $p \leq 0.05$ were considered statistically significant. ## 3.1. Relative Value of Antigen and Antibody after Virus Inoculation The results of PRRSV antigen and antibody tests before (day 0) and after (day 7) the challenge are shown in Table 1. Serological detection of the PRRSV antigen and antibody before the challenge was negative, and the antigen was positive after the challenge, indicating that the pigs were successfully infected with CHsx1401. ## 3.2. Isolation and Identification of Serum Exosomes The vesicles isolated from serum were examined by TEM. Most vesicles showed the characteristic central concavity of saucer- or disc-shaped exosomes, with clearly visible membrane edges and relatively intact morphology (Figure 1A,B). Nanoparticle tracking analysis showed that $95.73\%$ of the exosomes had a diameter of 30–150 nm, mainly around 72.25 nm, with an average diameter of 76.22 nm, consistent with the size characteristics of exosomes (Figure 1C). This size range was similar to that detected by TEM and further confirmed the identity of these vesicles as exosomes. Western blot analysis showed that the vesicles isolated from the serum samples were positive for the CD9 and CD81 proteins (Figure 1D). The above characteristics conform to the exosome identification standards formulated by the International Society for Extracellular Vesicles (ISEV) in MISEV2018 [47]. ## 3.3. Small RNA Sequencing of Serum Exosomes For each sample, the clean data reached 0.5 Gb, and the Q30 base percentage was above $96.20\%$. The clean reads of each sample were aligned with the pig reference genome.
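The $2^{-\Delta\Delta Ct}$ relative quantification used for the qRT-PCR verification above reduces to simple arithmetic on Ct values: normalize the target to the reference gene in each group, then exponentiate the difference. A minimal sketch with hypothetical Ct values (not the study's measurements):

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target miRNA relative to the reference gene (e.g., U6),
    treatment vs. control, by the 2^-ddCt method."""
    d_ct_treat = ct_target - ct_ref        # normalize treatment to reference
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl  # normalize control to reference
    dd_ct = d_ct_treat - d_ct_ctrl
    return 2 ** (-dd_ct)

# Hypothetical Cts: the target amplifies 2 cycles earlier (relative to U6)
# in the treatment group, i.e., roughly 4x more abundant
fold = relative_expression(ct_target=24.0, ct_ref=18.0,
                           ct_target_ctrl=26.0, ct_ref_ctrl=18.0)
print(fold)  # 4.0
```

Because each PCR cycle roughly doubles the product, every one-cycle drop in the normalized Ct corresponds to a two-fold increase in abundance.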
Among the 12 samples, the six control libraries yielded 10,920,887, 10,248,696, 10,109,117, 10,655,494, 9,217,285, and 9,782,523 reads, respectively, and the six treatment libraries yielded 11,889,518, 10,593,504, 12,846,080, 10,105,325, 11,729,451, and 9,789,542 reads, respectively. On average, $77.96\%$ of the total clean reads were 19–22 nucleotides (nt) in length (Figure 2A). The reads retained after quality control accounted for more than $92.59\%$ of the total reads. The processed clean reads were aligned to the porcine reference genome; the mapping rate of each of the 12 libraries exceeded $92.30\%$, with an average of $94.98\%$ (Figure 2B). This indicates that the constructed serum exosomal miRNA library was of high quality and suitable for further analysis. Details are listed in Supplementary Table S2. ## 3.4. Differential Expression Analysis of miRNAs After quantitative analysis of the identified miRNA expression, miRNAs were screened by the thresholds described in Section 2.6. A total of 305 miRNAs were detected before and after inoculation with the CHsx1401 strain (control, n = 6; treatment, n = 6). A total of 33 differentially expressed (DE) miRNAs were identified between the two groups: 13 were upregulated and 20 were downregulated in the treatment group (Figure 3 and Supplementary Table S3). ## 3.5. Functional Enrichment Analysis of miRNA Target Genes A total of 7283 target genes were predicted for the 33 DE miRNAs, and the functions of the target genes were mainly concentrated in the positive regulation of the MAPK cascade, lipid metabolic processes, regulation of intracellular signal transduction, the ERK1 and ERK2 cascade, etc. (Figure 4A). In terms of molecular function, the target genes of the differentially expressed miRNAs mainly relate to GTPase regulatory activity, kinase activity, nucleoside triphosphatase regulatory activity, and other functions related to signal transduction and energy metabolism (Figure 4B).
In addition, among the cellular components, the target genes were mainly associated with supramolecular complexes, the Golgi apparatus, autophagosomes, the cell surface, early endosomes, etc. (Figure 4C). The functions of these components are closely related to the formation of exosomes, which supports the reliability of the sequencing results. KEGG pathway enrichment analysis showed that the target genes were significantly enriched in endocytosis, the MAPK signaling pathway, the Rap1 signaling pathway, the sphingolipid signaling pathway, and the PI3K/Akt signaling pathway ($p \leq 0.05$) (Figure 5A). The enriched pathways were also classified and analyzed; the results showed that the KEGG pathways of the target genes were mainly enriched in environmental information processing, human diseases, and organismal systems (Figure 5B). ## 3.6. Targeting Prediction of Serum Exosomal miRNA and PRRSV CHsx1401 Genome According to the phastCons scores of single bases after alignment by PHAST, a total of eight most conserved segments (black bands above the peak map) were obtained among the viral genomes (Figure 6). Prediction of miRNA binding to these conserved segments identified a total of 31 DE miRNAs. Sixteen DE miRNAs were predicted to bind to the conserved region (14,644–15,020 nt) closest to the 3′ UTR (14,870–15,020 nt) of the CHsx1401 genome, including 5 miRNAs (ssc-miR-34c, ssc-miR-375, ssc-miR-378, ssc-miR-486, and ssc-miR-6529) that can bind to the 3′ UTR of CHsx1401. Among these miRNAs, only ssc-miR-223 was upregulated after infection; the others were downregulated. See Supplementary Table S4 for details. ## 3.7. Screening DE miRNAs Related to Exosome Function and PRRSV A variety of differentially expressed miRNAs related to exosome function and PRRSV were found by functional enrichment analysis of the target genes.
Among them, 11 DE miRNAs, such as ssc-miR-4331-3p, ssc-miR-744, and ssc-miR-320, are involved in exosome uptake, and their target genes are mainly concentrated in the Ras gene family, the annexin family, and the ADP-ribosylation gene family. Eighteen DE miRNAs, including ssc-miR-10b, ssc-miR-124a, and ssc-miR-128, participate in immune-related pathways, and their target genes are mainly concentrated in the MAPK gene family, the PI3K gene family, and the protein phosphatase gene family. Eleven DE miRNAs are involved in virus invasion, with related target genes mainly concentrated in the MAPK gene family and the protein phosphatase gene family. Furthermore, six DE miRNAs (ssc-miR-320, ssc-miR-423-5p, ssc-miR-4331-3p, ssc-miR-7137-3p, ssc-miR-744, and novel_102) are shared across exosome function, PRRSV invasion, and immune-related pathways, as shown in Figure 7. Details are shown in Supplementary Table S5. ## 3.8. QRT-PCR Assay of DE miRNAs between the Two Groups Five DE miRNAs were randomly selected for verification. According to the qRT-PCR results, the expression of ssc-miR-19a and ssc-miR-32 increased in the treatment group, while ssc-miR-124a, ssc-miR-375, and ssc-miR-34c showed higher expression in the control group, consistent with the sequencing data (Figure 8). ## 4. Discussion PRRSV remains a stubborn pathogen in the global pig industry, causing huge economic losses worldwide. At present, vaccination is the main means of preventing and controlling PRRSV, among which the modified live virus (MLV) vaccine is the most widely used [48]. Although this vaccine has been effective in reducing PRRS outbreaks and incidence, it has also greatly increased the genetic variation and diversity of the virus and led to viral recombination between wild-type and live vaccine viruses in the field [49,50].
In recent years, the spread and prevalence of the recombinant NADC30-like PRRSV strain have caused multiple outbreaks of porcine reproductive and respiratory syndrome in China. The CHsx1401 strain used in this study shares 92.2–99.1% similarity with NADC30, and NADC30-like PRRSV has since become an epidemic lineage in China. Exosomes, as mediators of cell communication, are widely found in various body fluids and have unique advantages in disease diagnosis and treatment [51,52]. According to previous reports, exosomes play an important communication role in antigen presentation [53], immune response [53,54], virus replication [54], cancer [55], neurodegenerative diseases [56], angiogenesis [57], and tumor cell migration [58] and invasion [59], and therefore have high research value. In this study, high-throughput sequencing technology was used to construct the miRNA expression profile of serum exosomes, and 33 DE miRNAs were identified. It is well established that host-encoded miRNAs can bind to the viral genome and then regulate the replication, synthesis, and release of the virus to limit infection and affect the pathological process [15]. Studies of miRNAs targeting viral genomes have also been repeatedly reported in animals. In chicken infectious bursal disease, gga-miR-454 and gga-miR-130b can target the viral genome to inhibit viral replication, while gga-miR-21 directly targets the viral protein VP1 to inhibit viral protein translation [60,61]. In PRRSV studies, ssc-miR-181 specifically binds to a highly conserved region downstream of the viral genome ORF4 and strongly inhibits PRRSV replication [62]. In the present study, the difference in ssc-miR-181 expression between the two groups did not reach significance. In our study, the genomes of nine different PRRSV strains were compared with that of the CHsx1401 strain, and the eight most conserved segments were identified.
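Tools like RNAhybrid and miRanda, used for the binding predictions here, score thermodynamic duplex stability; the core idea they build on is complementarity between the target sequence and the miRNA seed (nucleotides 2–8). A toy sketch of seed-site scanning with made-up sequences, for illustration only:

```python
def revcomp(rna):
    """Reverse complement of an RNA sequence."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(rna))

def seed_sites(mirna, target, seed_len=7):
    """Return 0-based positions in `target` that perfectly match the
    reverse complement of the miRNA seed (nucleotides 2..8, a 7-mer)."""
    seed = mirna[1:1 + seed_len]   # seed region of the mature miRNA
    site = revcomp(seed)           # sequence the target must contain
    hits, start = [], target.find(site)
    while start != -1:
        hits.append(start)
        start = target.find(site, start + 1)
    return hits

# Made-up miRNA and target; the target embeds one perfect seed site at position 5
mirna = "UGGAAUGUAAAGAAGUAUGUA"
utr = "AAAGG" + revcomp(mirna[1:8]) + "CCUAG"
print(seed_sites(mirna, utr))  # [5]
```

Real predictors additionally weigh free energy, site context, and imperfect pairing outside the seed, which is why they report scored candidate sites rather than exact matches.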
It was predicted that 31 DE miRNAs could bind to the 8 most conserved segments of CHsx1401 and that 16 DE miRNAs could bind to the conserved sequence close to the 3′ UTR of CHsx1401. Among them, 5 DE miRNAs (ssc-miR-34c, ssc-miR-375, ssc-miR-378, ssc-miR-486, and ssc-miR-6529) can simultaneously bind to the CHsx1401 3′ UTR. In addition, ssc-miR-223, which was upregulated, was predicted to bind to a target in the 3′ UTR of the PRRSV genome. These results suggest that the conserved sequences of the virus genome might play a key role in its pathogenicity, and that miRNAs that can bind to sequences conserved among the genomes of different PRRSV strains may be important for controlling the pathogenicity of the virus. Some differentially expressed miRNAs have been shown by previous studies to be related to PRRSV, or even directly involved in its regulation, including ssc-miR-10b [63], ssc-miR-378 [30], ssc-miR-124a [64], let-7f-5p [65], ssc-miR-744 [66], and ssc-miR-19a [67]. PRRSV can evade host defense by interfering with the innate immune response. This process is regulated by many signaling pathways, including the MAPK signaling pathway, the PI3K/Akt signaling pathway, autophagy, chemokine signaling, and the TNF signaling pathway. The MAPK signaling pathway comprises three main branches: the ERK1/2, JNK, and p38 pathways. Activation of the MAPK cascade can promote host cell apoptosis, assist the virus in escaping the host immune defense response, and promote PRRSV replication [68]. Moreover, the activation of c-Jun N-terminal kinases (JNKs) and p38 can also promote the release of the inflammatory factor IL-10 [68,69,70] and enhance the inflammatory effect. In addition to inducing apoptosis, PRRSV can also induce autophagy, which can promote PRRSV replication. The activation of PI3K/Akt is necessary for virus entry and the promotion of virus replication, and PRRSV-activated Akt inhibits host cell apoptosis by negatively regulating the JNK pathway [71].
TNF-α, together with other inflammatory factors, can play an important role in the induction and regulation of the inflammatory response, but TNF-α expression is negatively regulated during PRRSV replication [72]. In the present study, miRNAs (ssc-miR-10b, ssc-miR-122-5p, ssc-miR-124a, ssc-miR-128, ssc-miR-129a-5p, etc.) enriched in these pathways are involved in PRRSV-induced apoptosis, autophagy, and inflammation and are closely associated with the viral immune response, immune evasion, and replication. The cell plasma membrane contains a variety of lipid rafts, and sphingolipids (sphingomyelin and glycosphingolipids) and cholesterol are the key molecules of lipid rafts. Recognition of lipids by certain viral proteins may be a necessary condition for virus entry [73]. At the entry stage, enveloped viruses insert viral envelope glycoproteins into lipid rafts, interact with receptors located in lipid rafts, or change from their native state to an activated form to initiate or promote viral internalization/fusion; examples include HSV, SARS coronavirus, and porcine epidemic diarrhea virus [73,74]. Previous studies found that the removal of cholesterol from the surface of MARC-145 cells significantly reduced PRRSV infection, demonstrating that the inhibition of PRRSV infection was specifically mediated by the removal of cellular cholesterol. Depletion of cell membrane cholesterol significantly inhibited virus entry (particularly virus attachment) and release [75]. Clearly, sphingolipid metabolism can regulate membrane structure and adhesion, which is of great significance for PRRSV invasion. Endocytosis was the most significantly enriched pathway in this study. Endocytosis is an important mechanism of exosome uptake by target cells. Previous studies have shown that exosome uptake is an energy-demanding and cytoskeleton-dependent process, which highlights the potential role of endocytosis in this process [76].
Several pathways have been shown to mediate this process, including phagocytosis, macropinocytosis, clathrin-mediated endocytosis, etc. [77,78], which leads to different classifications and fates of the endocytosed cargo. The enrichment of differentially expressed exosomal miRNAs in this pathway indicates that exosomes play an important role in PRRSV infection, and the regulation of cargo transport and uptake by exosomes may lead to pathophysiological changes in target cells and organs. ## 5. Conclusions Through the identification and bioinformatics analysis of serum exosomal miRNAs from PRRSV-infected pigs, this study identified a variety of PRRSV-related pathways and differentially expressed miRNAs, such as ssc-miR-4331-3p, ssc-miR-744, ssc-miR-320, ssc-miR-10b, ssc-miR-124a, and ssc-miR-128, which play potential functional roles in the PRRSV-induced immune response, invasion, and exosome uptake. In addition, because a single miRNA can target multiple genes and a single gene can be regulated by multiple miRNAs, a number of miRNAs perform multiple functions in the above pathways. Some miRNAs have been verified to regulate PRRSV infection by acting on key receptors or by directly targeting the virus genome, such as ssc-miR-10b, ssc-miR-378, ssc-miR-124a, let-7f-5p, ssc-miR-744, and ssc-miR-19a. Meanwhile, the present study also predicted a variety of miRNAs that can bind to the most conserved fragment near the 3′ UTR of the CHsx1401 virus genome, including ssc-miR-34c, ssc-miR-375, ssc-miR-378, ssc-miR-486, and ssc-miR-6529, which may be important for regulating viral pathogenicity.
# Dietary Fermentation Product of Aspergillus Oryzae Prevents Increases in Gastrointestinal Permeability (‘Leaky Gut’) in Horses Undergoing Combined Transport and Exercise ## Abstract ### Simple Summary Equine leaky gut syndrome is characterized by gastrointestinal hyperpermeability and may be associated with adverse health effects in horses. The purpose was to evaluate the effects of a prebiotic *Aspergillus oryzae* product (SUPP) on the stress-induced leakiness of the gut. For 28 days, 8 horses received a diet containing the prebiotic or an unsupplemented diet (CO). On Days 0 and 28, horses were dosed with a compound (iohexol) that should only leak out of the gastrointestinal tract if the gut walls become leaky. Immediately following iohexol administration, four horses from each feeding group underwent 60 min of transport immediately followed by a moderate-intensity exercise bout of 30 min (EX), and the remaining horses were maintained as sedentary controls (SED). Blood was sampled before iohexol, immediately after trailering, and at 0, 1, 2, 4, and 8 h post-exercise. Blood was analyzed for iohexol, as well as lipopolysaccharide (a compound found in the gastrointestinal tract that can leak out) and serum amyloid A (a marker of inflammatory response). EX resulted in a significant increase in plasma iohexol in both CO and SUPP groups on Day 0; this increase was not seen in SED horses. On Day 28, EX increased plasma iohexol only in the CO feeding group; this increase was completely prevented by the provision of SUPP. It is concluded that combined transport and exercise induce leaky gut. Dietary SUPP prevents this and therefore may be a useful prophylactic for pathologies associated with gastrointestinal hyperpermeability in horses. ### Abstract Equine leaky gut syndrome is characterized by gastrointestinal hyperpermeability and may be associated with adverse health effects in horses. 
The purpose was to evaluate the effects of a prebiotic *Aspergillus oryzae* product (SUPP) on stress-induced gastrointestinal hyperpermeability. Eight horses received a diet containing SUPP (0.02 g/kg BW) or an unsupplemented diet (CO) (n = 4 per group) for 28 days. On Days 0 and 28, horses were intubated with an indigestible marker of gastrointestinal permeability (iohexol). Half the horses from each feeding group underwent 60 min of transport by trailer immediately followed by a moderate-intensity exercise bout of 30 min (EX), and the remaining horses stayed in stalls as controls (SED). Blood was sampled before iohexol, immediately after trailering, and at 0, 1, 2, 4, and 8 h post-exercise. At the end of the feeding period, horses were washed out for 28 days before being assigned to the opposite feeding group, and the study was replicated. Blood was analyzed for iohexol (HPLC), lipopolysaccharide (ELISA), and serum amyloid A (latex agglutination assay). Data were analyzed using three-way and two-way ANOVA. On Day 0, the combined challenge of trailer transport and exercise significantly increased plasma iohexol in both feeding groups; this increase was not seen in SED horses. On Day 28, EX increased plasma iohexol only in the CO feeding group; this increase was completely prevented by the provision of SUPP. It is concluded that combined transport and exercise induce gastrointestinal hyperpermeability. Dietary SUPP prevents this and therefore may be a useful prophylactic for pathologies associated with gastrointestinal hyperpermeability in horses. ## 1. Introduction Leaky gut syndrome (LGS) is characterized by gastrointestinal hyperpermeability and increased accessibility of the systemic environment to compounds that are normally sequestered within the gastrointestinal lumen [1]. The contribution of LGS to equine disease is poorly understood, and its mitigation by dietary interventions has not been described in the literature.
An MSc thesis from Michigan State University [2] describes a study in which oral phenylbutazone contributed to the development of gastrointestinal hyperpermeability in 18 Arabian horses, suggesting that gastric ulceration, phenylbutazone administration, or both, contribute to the development of LGS in horses. Evidence also implicates diets high in starch as complicit in gastrointestinal hyperpermeability [3]. Exercise is another likely candidate as an LGS risk factor but has not been clearly described in horses. Research in humans, however, provides evidence for a positive correlation between exercise intensity/duration and hyperpermeability of the gastrointestinal tract [4,5,6]. A recent study in eight horses reports that the combination of exercise and trailer transport induces an increase in gastrointestinal permeability, as well as increased serum amyloid A and lipopolysaccharide [7]. Whilst the pathophysiological consequences of LGS are as vaguely characterized as its triggers, there is evidence that, depending on the degree of inflammatory response to luminal toxins, LGS may impair skeletal muscle metabolism [8], and contribute to metabolic dysfunction [9,10], allergies [11,12], and inflammatory diseases such as arthritis [13]. Dietary interventions with evidence for an ability to protect against the development or clinical consequences of LGS will make an important contribution to preserving robust equine health. Perhaps due (at least in part) to the incomplete picture defining the cause-and-effect of LGS, interventions tend to rely heavily on the management of downstream clinical consequences. To the authors’ knowledge, there are currently no feed supplements or pharmaceutical drugs that have been evaluated against the gastrointestinal hyperpermeability that is the cornerstone of LGS. 
A commonly reported feature of LGS in non-equine species is gastrointestinal dysbiosis, and there is evidence that this dysbiosis contributes to the development of hyperpermeability [14,15,16,17]. Dysbiosis is likely in horses receiving a high-starch diet [3,16], and in horses experiencing physiological stress [16]. Thus, interventions with the potential to stabilize the gastrointestinal microbiota may protect against the development of hyperpermeability under conditions of stress. Aspergillus oryzae is a filamentous fungus that has demonstrated the ability to amplify the abundance of probiotic microbes (particularly Bifidobacterium pseudolongum) whilst protecting DSS-challenged mice against colitis [18]. The fermentation product of A. oryzae also promotes fiber-degrading bacteria in the rumen and hindgut when fed to lactating dairy cows [19]. In addition to evidence for a prebiotic-like effect, A. oryzae also exerts a marked anti-inflammatory effect in LPS-stimulated polymorphonuclear cells and improves the structure of the gastrointestinal lumen (i.e., villus height–crypt ratio) in broiler chickens [20]. Furthermore, the administration of a postbiotic from A. oryzae to calves prevented the increase in intestinal permeability associated with exposure to high ambient temperature [21]. These data support the hypothesis that A. oryzae protects against stress-induced hyperpermeability by amplifying the abundance of a healthy gastrointestinal microbiome. Accordingly, the purpose of the current study was to evaluate the effects of a fungal prebiotic produced through a proprietary fermentation process with A. oryzae (SUPP; BioZyme Inc.; St. Joseph, MO, USA) on equine gastrointestinal hyperpermeability induced by a combination of trailer transport and moderate-intensity exercise in horses. The objectives were to characterize the effect of a dietary A.
oryzae prebiotic on the appearance and disappearance of an oral permeability marker (iohexol) in the blood of horses challenged with combined transport and exercise stress, and to correlate observed effects with those on downstream evidence of inflammation (serum amyloid A (SAA)) and translocation of enteric endotoxin (lipopolysaccharide (LPS)). ## 2. Materials and Methods Care and use of animals was reviewed and approved by the University of Guelph Animal Care Committee in compliance with the guidelines published by the Canadian Council on Animal Care (Approval Number 3800). ## 2.1. Horses Eight healthy mares (age: 14.2 ± 3.7 years; body weight: 570 ± 47.4 kg) from the Arkell Equine Research Station, University of Guelph, were included in the randomized, partial cross-over trial. The horses were group-housed in an open turnout area, with unrestricted access to a large covered shelter bedded with straw, first-cut Timothy hay, water, and trace mineral salt. Two hundred and fifty grams of a 12% maintenance pellet ration^a was provided once per day (morning) (Table 1). Horses were all accustomed to a lifestyle that did not include forced exercise. At the beginning of the study, all 8 horses were randomized into one of two feeding groups (n = 4 per group): Group A: unsupplemented control diet (CO); Group B: diet containing the A. oryzae prebiotic^b (SUPP; 0.02 g/kg BW). SUPP was a textured, unpelleted product and was top-dressed onto the horse’s individual pelleted feed once per day. Horses consumed their pelleted ration with or without SUPP once per day in individual stalls. Once their feed was completely consumed, they were returned to the outdoor turnout area. Within each feeding group, horses were further divided into stress-challenged (EX; see below for details) or non-challenged sedentary controls (SED) (n = 2 per group per replicate). Horses received their assigned diet for 28 days.
On Days 0 and 28, one SED and one EX horse were evaluated in the morning, and a second SED and second EX horse were evaluated in the afternoon. At the end of the 28-day feeding period, horses were washed out for 28 days, and then assigned to the opposite feeding group for an additional 28 days. The trial was then repeated, for a final ‘n’ of 8 per feeding group (i.e., 4 × EX and 4 × SED per feeding group). Horses were tested at the same time of day (morning or afternoon) in both study periods. On study days, horses remained in their turnout area with unrestricted water access, but from which all feed had been removed. Following 12 h of fasting, horses were stalled and administered via nasogastric tube an indigestible marker of gastrointestinal permeability (iohexol^c; 5.6% solution, 1.0 mL/kg BW; 56 mg/kg BW) by a licensed veterinary professional [7]. The procedure was conducted in the absence of any sedation, so as not to interfere with normal gastrointestinal motility [22]. ## 2.2. Stress Challenge Horses were challenged with combined trailer transport and exercise, which we have previously demonstrated to produce a measurable and significant increase in gastrointestinal permeability [7]. Briefly, following the administration of iohexol, one EX horse was walked onto a 2-horse trailer for a 60 min drive to the Equine Sports Medicine and Reproduction Centre, University of Guelph. Once at the facility, a heart rate (HR) monitor^d was attached to the horse using a flexible belly-band, and the horse was free-lunged around an indoor arena (5 min walk, 10 min trot (left), 10 min trot (right), and 5 min walk) on a sand footing for 30 min. Horses were encouraged to achieve an exercise intensity that resulted in a HR of approximately 150 bpm during the trot, in order to encourage the horse to work at or beyond the anaerobic threshold [23].
At the cessation of exercise, EX horses returned directly to the group housing yard and were turned out with unrestricted access to hay and water. This challenge has previously been demonstrated to produce gastrointestinal hyperpermeability in horses [6]. Following the application of topical lidocaine at the jugular groove, blood was sampled from the jugular vein immediately before iohexol administration (P1), immediately after trailering (P2), immediately after exercise (P3), and then 1 (P4), 2 (P5), 4 (P6), and 8 h (P7) post-exercise. Blood samples were cooled on ice, centrifuged within 2 h of collection, and the recovered plasma was frozen (−20 °C) until analysis. Manure samples were collected within 2 min of voiding before the horse walked into the trailer, at the end of 60 min of transport, and at the first manure passed after exercise. ## 2.3. Non-Challenged Controls SED horses received iohexol at the same time as the EX horses, and blood was sampled at the same time as in the EX horses. After receiving iohexol, they were returned to the group housing area with free access to water. Hay was provided upon return of the EX horse from transport and exercise. ## 2.4. Sample Analysis All chemicals and reagents were purchased from Sigma Aldrich^f, unless otherwise stated. Plasma samples were analyzed for biomarkers of systemic inflammation (serum amyloid A and lipopolysaccharide (LPS)) and an exogenous marker of gastrointestinal permeability (iohexol). Plasma iohexol (μg/mL) was quantified via HPLC (Agilent 1200 series gradient system) with UV detection at 254 nm, as previously described [7] (intra- and inter-assay CV: 3.106% and 4.217%, respectively). SAA was determined by the Eiken Serum Amyloid A latex agglutination assay at a commercial laboratory (Animal Health Laboratory, University of Guelph).
Plasma samples, acclimated at room temperature, were analyzed in duplicate for LPS (pg/mL) using an equine-specific quantitative sandwich ELISA kit according to the manufacturer’s^h instructions (inter- and intra-assay coefficients of variability: 1.5% and 1.6%, respectively). A standard curve was used to generate a linear regression equation, which was used to calculate LPS concentrations in each sample. ## 2.5. Data Analysis Data analysis was conducted using SigmaPlot^i (Version 14.2). Data are presented as mean ± SD unless otherwise indicated. Normality of data was determined using the Shapiro–Wilk test. Three-way ANOVA was used to detect interactions between feeding groups, stress challenge, and time after iohexol administration. Two-way ANOVA was used to identify significant differences between feeding groups in SED and EX horses on Day 0 and Day 28 with respect to stress challenge and time after iohexol administration. The Holm–Sidak post-hoc test was used to identify significantly different means when a significant F-ratio was calculated. Significance was accepted at p ≤ 0.05. ## 3.1.1. Control Diet (Figure 1) Day 0: In SED horses receiving the CO diet, there was no significant change in plasma iohexol at any time between P1 (0.56 ± 0.02 μg/mL) and P7 (0.69 ± 0.04 μg/mL) (p = 0.26). EX horses demonstrated a significant increase in plasma iohexol between P1 (0.52 ± 0.03 μg/mL) and P3 (1.14 ± 0.08 μg/mL) (p = 0.02). Plasma iohexol was significantly higher in EX horses than in SED horses at P2 (SED: 0.71 ± 0.06 μg/mL; EX: 1.02 ± 0.18 μg/mL) (p = 0.04) and P3 (SED: 0.75 ± 0.09 μg/mL; EX: 1.14 ± 0.08 μg/mL) (p = 0.01) (Figure 1). Day 28: In SED horses receiving the CO diet, there was no significant change in plasma iohexol at any time between P1 (0.48 ± 0.04 μg/mL) and P7 (0.60 ± 0.06 μg/mL) (p = 0.44).
EX horses demonstrated a significant increase in plasma iohexol between P1 (0.58 ± 0.09 μg/mL) and P3 (1.07 ± 0.06 μg/mL) (p = 0.006). Plasma iohexol was significantly higher in EX horses than in SED horses at P2 (SED: 0.54 ± 0.06 μg/mL; EX: 1.01 ± 0.12 μg/mL) (p < 0.001), P3 (SED: 0.56 ± 0.07 μg/mL; EX: 1.07 ± 0.12 μg/mL) (p < 0.001) and P4 (SED: 0.59 ± 0.04 μg/mL; EX: 1.00 ± 0.10 μg/mL) (p < 0.001) (Figure 1). Day 0 vs. Day 28: In SED horses, plasma iohexol was significantly higher on Day 0 than on Day 28 at P3 and P5 (p = 0.04 and 0.05, respectively). There were no significant differences between Day 0 and Day 28 in EX horses (p = 0.23) (Figure 1). ## 3.1.2. Supplemented Diet (Figure 2) Day 0: In SED horses receiving the SUPP diet, there was a significant increase in plasma iohexol between P1 (0.51 ± 0.03 μg/mL) and P2 (0.87 ± 0.04 μg/mL) (p = 0.005), P3 (0.82 ± 0.06 μg/mL) (p = 0.02) and P4 (0.97 ± 0.09 μg/mL) (p < 0.001). EX horses demonstrated a significant increase in plasma iohexol between P1 (0.70 ± 0.15 μg/mL) and P3 (1.75 ± 0.19 μg/mL) (p = 0.01). Plasma iohexol was significantly higher in EX horses than in SED horses at P3 (SED: 0.82 ± 0.06 μg/mL; EX: 1.75 ± 0.19 μg/mL) (p < 0.001) (Figure 2). Day 28: In SED horses receiving the SUPP diet, there was no significant change in plasma iohexol at any time between P1 (0.49 ± 0.05 μg/mL) and P7 (0.70 ± 0.05 μg/mL) (p = 0.43). There was also no significant increase in plasma iohexol in EX horses at any time between P1 (0.87 ± 0.23 μg/mL) and P7 (0.56 ± 0.12 μg/mL) (p = 0.36) (Figure 2). ## 3.1.3. Day 0 and Day 28 in Supplemented and Control Diets On Day 0, iohexol tended to be higher in SUPP than CO horses (p = 0.053).
Overall, iohexol was significantly elevated in EX horses at P2 and P3 (p < 0.001) and P4 (p = 0.02) compared with P1, but there were no differences between treatment groups (Figure 2). On Day 28, iohexol was significantly higher overall in CO horses compared with SUPP horses (p = 0.008). Overall, iohexol was significantly higher at P3 than P1, but there were no significant differences between treatment groups (Figure 2). ## Control Diet Day 0: In SED horses receiving the CO diet, there was no significant change in SAA at any time between P1 (0.10 ± 0.1 μg/mL) and P7 (0.10 ± 0.1 μg/mL) (p = 0.78). There was also no significant change in SAA in EX horses between P1 (0.22 ± 0.16 μg/mL) and P7 (0.86 ± 0.56 μg/mL) (p = 0.70). Overall, SAA was significantly higher in EX than in SED horses (p = 0.01), but there were no significant differences between groups at any specific time point (Table 2). Day 28: In SED horses receiving the CO diet, there was no significant change in SAA at any time between P1 (0.0 ± 0.0 μg/mL) and P7 (0.10 ± 0.10 μg/mL) (p = 0.92). There was also no significant change in SAA in EX horses between P1 (0.15 ± 0.15 μg/mL) and P7 (0.20 ± 0.20 μg/mL) (p = 0.96). In horses receiving the CO diet, SED horses had significantly lower SAA than EX horses overall (p = 0.04), but there were no significant differences at individual time points (Table 2). Day 0: In SED horses receiving the CO diet, there was no significant change in LPS at any time between P1 (2.10 ± 0.09 pg/mL) and P7 (2.13 ± 0.12 pg/mL) (p = 0.71). There was also no significant change in LPS in EX horses between P1 (2.18 ± 0.06 pg/mL) and P7 (2.21 ± 0.10 pg/mL) (p = 0.99). Overall, LPS was significantly higher in EX than in SED horses (p = 0.02), but there were no significant differences between SED and EX at any specific time point (Table 2).
Day 28: In SED horses receiving the CO diet, there was no significant change in LPS at any time between P1 (2.1 ± 0.09 pg/mL) and P7 (2.1 ± 0.05 pg/mL) (p = 0.94). There was also no significant change in LPS in EX horses between P1 (2.14 ± 0.03 pg/mL) and P7 (2.10 ± 0.08 pg/mL) (p = 0.94). Overall, LPS was significantly higher in EX than in SED horses (p = 0.004), but there were no significant differences between groups at specific time points (Table 2). ## Supplemented Diet Day 0: In SED horses receiving the SUPP diet, there was no significant change in SAA at any time between P1 (0.33 ± 0.33 μg/mL) and P7 (0.15 ± 0.15 μg/mL) (p = 0.71). There was also no significant change in SAA in EX horses between P1 (0.08 ± 0.08 μg/mL) and P7 (0.30 ± 0.30 μg/mL) (p = 0.70). There were no significant differences between SED and EX at any specific time point on Day 0 (Table 2). Day 28: In SED horses receiving the SUPP diet, there was no significant change in SAA at any time between P1 (0.17 ± 0.17 μg/mL) and P7 (0.35 ± 0.15 μg/mL) (p = 0.59). There was also no significant change in SAA in EX horses between P1 (0.35 ± 0.25 μg/mL) and P7 (1.00 ± 0.53 μg/mL) (p = 0.96). Overall, SAA was significantly higher in EX than in SED horses (p = 0.02), but there were no significant differences between groups at specific time points (Table 2). Day 0: In SED horses receiving the SUPP diet, there was no significant change in LPS at any time between P1 (2.15 ± 0.04 pg/mL) and P7 (2.17 ± 0.04 pg/mL) (p = 0.91). There was also no significant change in LPS in EX horses between P1 (2.06 ± 0.04 pg/mL) and P7 (2.13 ± 0.01 pg/mL) (p = 0.98). LPS was significantly higher in SED than EX horses (p = 0.03), but there were no significant differences between groups at specific time points (Table 2).
Day 28: In SED horses receiving the SUPP diet, there was no significant change in LPS at any time between P1 (2.20 ± 0.08 pg/mL) and P7 (2.18 ± 0.07 pg/mL) (p = 0.90). There was also no significant change in LPS in EX horses between P1 (2.06 ± 0.04 pg/mL) and P7 (2.06 ± 0.05 pg/mL) (p = 0.97). LPS was significantly higher in SED than EX horses overall (p < 0.001), as well as at P5 (p = 0.01) and P6 (p = 0.05) (Table 2). ## Day 0 and Day 28 in Supplemented and Control Diets On Day 0, there were no differences in SAA between SUPP and CO horses (p = 0.257). Overall, SAA was significantly higher in EX than SED horses (p = 0.015), primarily owing to significantly higher SAA in EX than SED horses in the CO group (p = 0.002) that was not observed in SUPP horses (p = 0.826) (Table 2). On Day 28, SAA was significantly higher overall in SUPP horses compared with CO horses (p = 0.01). There was no significant difference in SAA between EX and SED horses overall, but SAA was significantly higher in SED horses than in EX horses receiving the supplemented diet (p = 0.05) (Table 2). On Day 0, there were no differences in LPS between SUPP and CO horses (p = 0.346). There was also no significant difference between EX and SED horses overall (p = 0.268). LPS was significantly higher in EX than SED horses in the CO group (p = 0.003), but there were no significant differences in LPS between EX and SED horses in the SUPP group (p = 0.068) (Table 2). On Day 28, there were no differences in LPS between SUPP and CO horses (p = 0.674). There was also no significant difference between EX and SED horses overall (p = 0.392). LPS was significantly higher in EX than SED horses in the CO group (p = 0.004) and significantly lower in EX than SED in the SUPP group (p < 0.001) (Table 2). ## 4. Discussion
The purpose of the current study was to quantify the effect of a dietary A. oryzae prebiotic on gastrointestinal permeability in horses challenged with combined transport and exercise stress. The main finding was that 28 days of supplementation with the A. oryzae prebiotic completely prevented stress-induced gastrointestinal hyperpermeability in this group of horses. We have previously demonstrated that the combined transport and exercise stress model utilized in the current study produces gastrointestinal hyperpermeability and an increase in blood biomarkers that evidence transient, low-grade systemic inflammation [7]. Consistent with our previous study, we report herein that 60 min of trailer transport immediately preceding half an hour of moderate-intensity exercise is a clear, reproducible model of gastrointestinal hyperpermeability. On Day 0 for both feeding groups, the stress model resulted in a significant increase in the systemic appearance of orally administered iohexol that was not seen in unstressed controls. That this increase in the systemic appearance of iohexol was absent in stressed horses in the SUPP feeding group on Day 28 provides strong evidence for the role of the A. oryzae prebiotic in protecting gastrointestinal barrier function in horses during stress. The mechanism for this blockade is not known but may be associated with an effect of the A. oryzae prebiotic on the enteric microbiome. A. oryzae strongly increases the relative abundance of anti-inflammatory bacterial strains such as Bifidobacterium [18,24] and important fiber-degrading bacteria such as Ruminococcaceae [19]. Dietary provision of Bifidobacterium-based probiotics to obese humans results in a marked decrease in gastrointestinal hyperpermeability [25], which provides support for the hypothesis that the A. oryzae prebiotic protects the enteric barrier from stress-induced hyperpermeability via its modulation of the gastrointestinal microbiome. This hypothesis should be tested in future studies.
When dietary groups were combined, there was an overall increase in SAA in response to our stress challenge, consistent with our previous study [7], but this effect was not observed when analyzing dietary groups individually. SAA is the major acute phase protein in the horse. While it is a highly sensitive indicator of an inflammatory event, it is not specific, and its production can be markedly increased in the presence of almost any inflammatory challenge [26]. The vast majority of SAA is produced by hepatocytes, but small amounts may also be produced by enterocytes [27]. Our small sample size, together with SAA fluctuations in both EX and SED groups that were unrelated to our stress challenge, likely contributed to the lack of a statistically significant increase in SAA within groups. Consequently, the effect of the A. oryzae prebiotic on this biomarker remains unknown. Owing to the highly plastic nature of SAA in vivo, future studies to evaluate the effects of the A. oryzae prebiotic on this outcome measure may benefit from controlled in vitro assessment of enterocyte-specific production of SAA [27]. The marked gastrointestinal hyperpermeability observed in the current study in EX horses in the control feeding group on Days 0 and 28 was not associated with a significant time-dependent increase in circulating LPS; like SAA, this may have been due, at least in part, to our small sample size. However, the overall serum LPS concentration of EX horses was significantly higher than that of SED horses. Surprisingly, however, serum LPS was significantly lower in EX than in SED horses in the A. oryzae feeding group. This result is probably not associated with the supplement because it was observed both on Day 0 (prior to beginning supplementation) and on Day 28, and so is more likely an artifact of randomizing a small number of animals to the feeding groups.
Furthermore, our maximum LPS concentration of 2.24 pg/mL in either feeding group is well within the reference interval for the normal flux of systemic LPS in healthy horses [26]. Future studies designed to detect the effect of the dietary A. oryzae prebiotic on the translocation of enteric LPS at levels expected to be associated with disease will require a stronger stress challenge such as non-steroidal anti-inflammatory drugs [2,27]. The current study had fewer animals in each treatment group than our previous study, which may have resulted in the current study being underpowered to detect the effects of stress and/or diet on SAA and LPS. ## 5. Conclusions In conclusion, the data presented herein provide compelling evidence for a protective effect of A. oryzae prebiotic on stress-induced gastrointestinal hyperpermeability. This supplement may be a useful dietary ingredient for horses undergoing combined transport and exercise stress as a prevention for gastrointestinal hyperpermeability. Future studies should explore the effects of A. oryzae prebiotic on the equine gastrointestinal microbiome as a potential mode of action.
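The Holm–Sidak step-down procedure named in Section 2.5 has a simple closed form. The following is a minimal Python sketch with made-up p-values, not the study's actual analysis (which was run in SigmaPlot); packages such as statsmodels implement the same rule.

```python
def holm_sidak(pvals):
    """Holm-Sidak step-down adjustment of a list of raw p-values.

    Sorted ascending, the i-th smallest p-value (0-based rank) is
    adjusted to 1 - (1 - p)^(m - i), with monotonicity enforced so
    adjusted values never decrease as raw p-values increase.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        adj = 1.0 - (1.0 - pvals[idx]) ** (m - rank)
        running_max = max(running_max, min(adj, 1.0))  # step-down monotonicity
        adjusted[idx] = running_max
    return adjusted

# Made-up raw p-values for three pairwise comparisons
print(holm_sidak([0.01, 0.04, 0.03]))  # smallest becomes 1 - 0.99**3 ≈ 0.0297
```

Compared with a plain Bonferroni correction, this controls the family-wise error rate while remaining slightly less conservative, which is why it is a common post-hoc choice after a significant ANOVA F-ratio.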
# The Gut Microbiota of Young Asian Elephants with Different Milk-Containing Diets ## Abstract ### Simple Summary Insufficient maternal milk is one of the important reasons for the low survival rate of young Asian elephants. Finding the optimal breast milk supplementation for young Asian elephants is a matter of urgency. In our study, we investigated the microbiomes of young Asian elephants on different milk-containing diets (elephant milk only, elephant milk–plant mixed feed, and goat milk–plant mixed feed). Our results suggested that goat milk is not suitable for young elephants, and yak milk may be an ideal source of supplemental milk for Asian elephants. ### Abstract Evaluating the association between milk-containing diets and the microbiomes of young Asian elephants could assist in establishing optimal breast milk supplementation to improve offspring survival rates. The microbiomes of young Asian elephants on different milk-containing diets (elephant milk only, elephant milk–plant mixed feed, and goat milk–plant mixed feed) were investigated using high-throughput sequencing of 16S rRNA genes and phylogenetic analysis. Microbial diversity was lower in the elephant milk-only diet group, with a high abundance of Proteobacteria compared to the mixed-feed diet groups. Firmicutes and Bacteroidetes were dominant in all groups. Spirochaetae, Lachnospiraceae, and Rikenellaceae were abundant in the elephant milk–plant mixed-feed diet group, and Prevotellaceae was abundant in the goat milk–plant mixed-feed diet group. Membrane transport and cell motility metabolic pathways were significantly enriched in the elephant milk–plant mixed-feed diet group, whereas amino acid metabolism and signal transduction pathways were significantly enriched in the goat milk–plant mixed-feed diet group. The intestinal microbial community composition and associated functions varied significantly between diets. The results suggest that goat milk is not suitable for young elephants.
Furthermore, we provide new research methods and directions regarding milk source evaluation to improve elephant survival, wellbeing, and conservation. ## 1. Introduction The Asian elephant (Elephas maximus) is a large phytophagous mammal that is mainly found in the Xishuangbanna region of Yunnan Province, China, south of 24.6° north latitude, and in parts of south and southeast Asia [1]. The Asian elephant is a Class I protected wildlife species in China and is listed as endangered by the International Union for Conservation of Nature Red List of Threatened Species™ [2,3]. Furthermore, these elephants are in Appendix I of the Convention on International Trade in Endangered Species of Wild Fauna and Flora [4]. There are only approximately 300 Asian elephants left in China [5]. Although the Asian elephant population has rebounded after years of effort, its survival rate still requires improvement. Approximately 25.6% of elephant calves in Myanmar reportedly die before they reach 5 years of age, with a quarter of these deaths attributed to insufficient maternal milk or the inability of the calves to receive the milk properly [6]. Similarly, in wild African elephant populations, an average of 19% of young elephants die before 5 years of age, with a proportion of these deaths attributed to maternal difficulties in meeting nursing needs [7]. During droughts, maternal elephants struggle to maintain milk production; because the metabolic demands of young male elephants are greater, their needs are the hardest to meet, and young males are therefore more likely to die [8]. A major reason for the high mortality rate of elephant calves in zoos, especially in Asia, is the refusal of mothers to nurse their young, resulting in the need for manual intervention to feed the young [9,10].
Inadequate maternal milk in Asian elephants results in a poor survival rate of young elephants, and staff at the Xishuangbanna Asian Elephant Sanctuary are currently using goat milk to supplement the feeding of rescued infant and young elephants. The large number of microbial communities present in the gastrointestinal tract of animals constitute the microbiota, which contributes to host nutrient acquisition and immune regulation [11,12] and assists in maintaining host homeostasis in response to environmental changes [13,14,15]. Diet, especially early nutrition, influences the composition and metabolic activity of the gut microbial community and is a key factor in the growth and healthy development of newborn elephants [16,17]. Breastfeeding is considered an influential driver of the gut microbiota composition during infancy, potentially affecting its function [18]. The gut microbiota early in life is associated with physiological development and is involved in a range of host biological processes, particularly immunity, cognitive neurodevelopment, metabolism, and infant health [19,20]. Early foods can improve the survival rate of infant and young elephants; therefore, it is vital to study the effects of different foods, especially different kinds of milk, on the gut microbiota of infant and young elephants. In this study, the gut microbiota composition and function of young elephants fed an elephant milk-only diet, an elephant milk–plant mixed-feed diet, and a goat milk–plant mixed-feed diet were analyzed using 16S rRNA gene high-throughput sequencing technology. Although there have been studies regarding the use of non-breast milk dairy products for feeding endangered wildlife (e.g., Siberian tigers [16]), only a few studies on the gut microbiota of Asian elephants on diets containing goat milk exist.
To the best of our knowledge, this study is the first to describe the composition and function of the gut microbiota of young elephants fed a goat milk diet. ## 2.1. Fecal Sample Collection In March 2019, we collected fresh feces from eight young Asian elephants with different milk-containing diets at Wild Elephant Valley in Xishuangbanna: three in the elephant milk-only diet group (BF1, BF2, and BF3; healthy, about 6 months old, and able to pass freely under the abdomen of adult female elephants); three in the elephant milk–plant mixed feeding group (BPM1, BPM2, and BPM3; healthy, more than one year old, and standing as tall as the base of the forelegs of adult female elephants); and two in the goat milk–plant mixed feeding group (GPM1 and GPM2; healthy, more than three years old, and slightly taller than the previous group). The detailed sampling method was as follows [21]: young elephants were accompanied by the breeder until defecation, and samples were collected immediately from the center of fresh feces with sterile tweezers, placed in sterile centrifuge tubes, and stored in liquid nitrogen. Samples were transported in liquid nitrogen and then stored at −80 °C until DNA extraction. ## 2.2. Genomic DNA Extraction, Gene Amplification and High-Throughput Sequencing Microbial genomic DNA was extracted from the eight fecal samples using the EZNA® Soil DNA Kit (Omega, GA, USA) following the kit instructions. DNA quality and quantity were assessed using a 1% agarose gel and a NanoDrop 2000 spectrophotometer (Thermo Scientific, Wilmington, DE, USA). The hypervariable region V3–V4 of the bacterial 16S rRNA gene was amplified with the primer pair 338F (5′-ACTCCTACGGGAGGCAGCAG-3′) and 806R (5′-GGACTACHVGGGTWTCTAAT-3′) using an ABI GeneAmp 9700 PCR thermal cycler (Applied Biosystems, Foster City, CA, USA).
The PCR mix consisted of 4 μL of 5× TransStart FastPfu buffer, 2 μL of 2.5 mM dNTPs, 0.8 μL each of 5 μM forward and reverse primers, 0.4 μL of TransStart FastPfu DNA polymerase, 10 ng of template DNA, and ddH2O up to 20 μL. PCR amplification was performed in triplicate under the following conditions: 95 °C for 3 min, followed by 30 cycles of 95 °C for 30 s, 55 °C for 30 s, and 72 °C for 45 s, and a final extension at 72 °C for 10 min. Purified amplicons were pooled in equimolar aliquots and then sequenced on the Illumina MiSeq platform (Illumina, San Diego, CA, USA) to obtain paired-end reads [22]. ## 2.3. Sequencing Data Processing Raw 16S rRNA gene sequencing reads were demultiplexed and quality-filtered using fastp version 0.20.0 [23] and then merged using FLASH version 1.2.7 [24]. Stringent quality criteria were applied: the 300 bp reads were truncated at any site that received an average quality score <20 over a 50 bp sliding window, and truncated reads shorter than 50 bp or containing ambiguous characters were discarded. Sequences required an overlap larger than 10 bp for assembly, with a maximum mismatch ratio of 0.2 in the overlap region; reads that could not be assembled were discarded. Samples were distinguished by barcodes and primers, and the sequence direction was adjusted accordingly. Exact barcode matching was required, and a mismatch of two nucleotides in primer matching was allowed. Operational taxonomic units (OTUs) were clustered at a $97\%$ similarity cutoff [25,26] using UPARSE version 7.1 [25], and chimeric sequences were identified and removed. Taxon assignments for each representative OTU sequence were determined using RDP Classifier version 2.2 [27] with the 16S rRNA gene database (Silva v138) at a confidence threshold of 0.7. ## 2.4. Data Analysis and Statistical Methods To investigate similarities and differences in microbial community structure among the milk-containing diet groups, sample-level clustering was performed using the UPGMA method based on the Bray–Curtis distance matrix. Alpha diversity indices, including the Chao1, Shannon, and Pielou indices, were calculated using the software mothur (version 1.30.2, http://www.mothur.org/wiki/Schloss_SOP#Alpha_diversity, accessed on 23 April 2019), and difference tests between groups were performed using Welch's t-test. The Kruskal–Wallis H test was applied to detect species that exhibited differences in abundance between the microbial communities of the groups. In addition, functional prediction results were obtained using PICRUSt2, and the significance of differences was tested using the Kruskal–Wallis H test. ## 3.1. Unweighted Pair Group Method with Arithmetic Mean Hierarchical Clustering Analysis At the family and genus levels, the samples were analyzed using hierarchical clustering based on the unweighted pair group method with arithmetic mean (UPGMA) (Figure 1), which indicated that the samples clearly clustered into two groups: the elephant milk-only diet group (BF1, BF2, and BF3) and the milk–plant mixed-feed diet group (remaining samples). The milk–plant mixed-feed diet group was further divided into two groups according to the type of supplemented milk: the elephant milk–plant mixed-feed diet group (BPM1, BPM2, and BPM3) and the goat milk–plant mixed-feed diet group (GPM1 and GPM2). These results indicated that the gut microbial community composition of young elephants in the elephant milk-only diet group differed clearly from that of young elephants in the milk–plant mixed-feed diet groups. 
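The diversity and clustering computations described in Section 2.4 can be sketched in Python. The toy OTU count matrix below is invented for illustration; the paper itself used mothur for the indices and UPGMA on Bray–Curtis distances.

```python
# Sketch of the alpha-diversity indices and UPGMA clustering described above.
# Toy OTU count matrix: rows = samples, columns = OTUs (values are invented).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

counts = np.array([
    [30, 20, 10,  0,  0],   # milk-only-like sample
    [28, 22,  8,  1,  0],   # milk-only-like sample
    [ 5, 10, 25, 30, 12],   # mixed-feed-like sample
    [ 4, 12, 22, 28, 15],   # mixed-feed-like sample
])

def shannon(row):
    """Shannon diversity H' = -sum(p * ln p) over non-zero OTUs."""
    p = row[row > 0] / row.sum()
    return -(p * np.log(p)).sum()

def chao1(row):
    """Bias-corrected Chao1 richness from singleton/doubleton counts."""
    s_obs = (row > 0).sum()
    f1, f2 = (row == 1).sum(), (row == 2).sum()
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

def pielou(row):
    """Pielou evenness J = H' / ln(S_obs)."""
    s_obs = (row > 0).sum()
    return shannon(row) / np.log(s_obs) if s_obs > 1 else 0.0

# Bray-Curtis distances on relative abundances, then UPGMA
# ("average" linkage in scipy).
rel = counts / counts.sum(axis=1, keepdims=True)
tree = linkage(pdist(rel, metric="braycurtis"), method="average")

# Cutting the tree at two clusters separates the two diet patterns.
labels = fcluster(tree, t=2, criterion="maxclust")
```

Cutting the UPGMA tree at two clusters mirrors the split between the milk-only and mixed-feed samples reported in Section 3.1.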
Moreover, the gut microbial community composition of young elephants in the elephant milk-only diet group also differed significantly from that of young elephants in the goat milk–plant mixed-feed diet group. ## 3.2. Alpha Diversity Analysis An α-diversity test was performed to evaluate differences in the gut microbial community between the three groups at the family level (Figure 2). The richness index (Chao1) and diversity index (Shannon) differed significantly between the three groups ($p \leq 0.05$, Figure 2A,B). The richness and diversity indices of the milk–plant mixed-feed diet groups were significantly higher than those of the elephant milk-only diet group ($p \leq 0.05$), consistent with the greater dietary diversity in the milk–plant mixed-feed diet groups. In addition, the Shannon and Pielou indices were significantly higher in the elephant milk–plant mixed-feed diet group than in the goat milk–plant mixed-feed diet group ($p \leq 0.05$, Figure 2B,C). These findings suggested that supplementation with elephant milk resulted in a more diverse and homogeneous gut bacterial community in young elephants than supplementation with goat milk, and that goat milk supplementation may lead to a highly dominant bacterial taxon in the gut environment of young elephants. ## 3.3. Community Composition Firmicutes and Bacteroidetes represented the dominant phyla in young elephant guts, consistent with the dominant phyla in the gut microbiota of adult Asian elephants (Figure 3) [28]. The intestinal microbiota of young elephants in the elephant milk-only diet group (BF1, BF2, and BF3) contained a high abundance of Proteobacteria, averaging approximately $17.3\%$ (Figure 3). 
The elephant milk–plant mixed-feed diet group (BPM1, BPM2, and BPM3) had a higher abundance of Spirochaetae (approximately $8.8\%$), Fibrobacteria (approximately $3.8\%$), and Verrucomicrobia (approximately $3.6\%$) than the elephant milk-only diet group (Figure 3). BPM1 had a relatively higher intake of elephant milk and, correspondingly, a higher abundance of Proteobacteria, while BPM2 and BPM3, which had lower intakes of elephant milk, had an extremely low abundance of Proteobacteria, indicating that elephant milk is closely related to Proteobacteria levels. The goat milk–plant mixed-feed diet group (GPM1 and GPM2) contained nearly no Proteobacteria, Spirochaetae, or Fibrobacteria (Figure 3); the considerably low abundance of Proteobacteria further supports its close association with elephant milk intake. In addition, Synergistetes were more abundant in the intestinal microbiota of young elephants in the goat milk–plant mixed-feed diet group than in the other groups (Figure 3). At the family level, the intestinal bacteria of young elephants in the elephant milk-only diet group consisted mainly of Bacteroidaceae, Enterobacteriaceae, Ruminococcaceae, and Lachnospiraceae, accounting for >$75\%$ of intestinal bacteria (Figure 1A). The intestinal bacteria of young elephants in the elephant milk–plant mixed-feed diet group consisted mainly of Lachnospiraceae, Ruminococcaceae, Rikenellaceae, Spirochaetaceae, and Prevotellaceae, accounting for >$70\%$ of intestinal bacteria (Figure 1A). BPM1, which consumed a large amount of elephant milk, had an abundance of Enterobacteriaceae, suggesting that Enterobacteriaceae levels are closely related to the elephant milk consumed by young elephants. The intestinal bacteria of young elephants in the goat milk–plant mixed-feed diet group consisted mainly of Ruminococcaceae, Lachnospiraceae, Prevotellaceae, and Synergistaceae, accounting for approximately $60\%$ of intestinal bacteria (Figure 1A). 
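Family-level relative abundances like those reported above are obtained by collapsing OTU counts by their taxonomic assignment. A minimal pandas sketch, in which the OTU names and the OTU-to-family map are hypothetical (the real assignments came from the RDP Classifier against Silva v138):

```python
import pandas as pd

# Illustrative OTU counts for one sample (invented values).
otu_counts = pd.Series({"OTU1": 120, "OTU2": 80, "OTU3": 60, "OTU4": 40})

# Hypothetical OTU -> family assignments.
otu_to_family = {
    "OTU1": "Lachnospiraceae",
    "OTU2": "Ruminococcaceae",
    "OTU3": "Lachnospiraceae",
    "OTU4": "Prevotellaceae",
}

# Collapse OTU counts to family level, then convert to percentages.
family_counts = otu_counts.groupby(otu_to_family).sum()
family_pct = 100 * family_counts / family_counts.sum()
```

With these toy numbers, Lachnospiraceae accounts for $60\%$ of the sample; the same aggregation at scale yields the family-level profiles shown in Figure 1A.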
## 3.4. Differential Microbiota Analysis At the family level, differential microbiota analysis of young elephants (Figure 4) revealed that Rikenellaceae, Spirochaetaceae, Fibrobacteraceae, and Bacteroidales_UCG-001 were significantly enriched in the elephant milk–plant mixed-feed diet group ($p \leq 0.05$). These bacterial taxa belong to lignocellulose-degrading bacterial phyla commonly encountered in the gastrointestinal tracts of animals, such as Bacteroidetes, Spirochaetes, and Fibrobacteres, suggesting that elephant milk enriches lignocellulose-digesting bacterial groups in the intestinal tract of young elephants, facilitating the transition from an elephant milk diet to a plant-based diet. Prevotellaceae, Synergistaceae, and Christensenellaceae were significantly enriched in the goat milk–plant mixed-feed diet group ($p \leq 0.05$). This indicated a significant difference in the effect of elephant versus goat milk supplementation on the intestinal microbiota of young elephants. ## 3.5. Function Predictive Analysis Predictive analysis of intestinal microbiota function in young elephants revealed differences in microbial community functions between the milk-containing diet groups (Figure 5). Carbohydrate metabolism, metabolism of cofactors and vitamins, and glycan biosynthesis and metabolism were significantly more enriched in the elephant milk-only diet group than in the mixed-feed diet groups ($p = 0.044$). These functional enrichments are beneficial to infant elephant growth and development. The enrichment of nucleotide metabolism ($p = 0.044$) and biosynthesis of other secondary metabolites ($p = 0.044$) was significantly higher in the goat milk–plant mixed-feed diet group than in the elephant milk-only diet group, indicating that secondary metabolic pathways occurred during food digestion in the goat milk–plant mixed-feed diet group. 
The metabolism of other amino acids ($p = 0.030$), transformation ($p = 0.046$), transcription ($p = 0.020$), replication and repair ($p = 0.030$), endocrine system ($p = 0.044$), and cell growth and death ($p = 0.030$) pathways were also significantly more enriched in the elephant milk–plant mixed-feed diet group than in the elephant milk-only diet group. The significant enrichment of these functions reflected strong metabolism and good growth and development of the young elephants in this group, indicating that the elephant milk–plant mixed-feed diet promoted the transition of young elephants from an elephant milk-based diet to a plant-based diet. Enrichment of the membrane transport ($p = 0.044$) and cell motility ($p = 0.044$) pathways was significantly higher in the elephant milk–plant mixed-feed diet group than in the goat milk–plant mixed-feed diet group. Meanwhile, the energy metabolism ($p = 0.044$), amino acid metabolism ($p = 0.044$), and signal transduction ($p = 0.025$) pathways were significantly more enriched in the goat milk–plant mixed-feed diet group than in the elephant milk–plant mixed-feed diet group. These results suggested that supplementation of the host's diet with milk from different sources led to changes in the functional structure of the gut microbiota in Asian elephants. ## 3.6. Composition Comparison of Different Kinds of Milk There were significant differences in the composition and function of the gut microbiota between the elephant milk diet groups and the goat milk diet group of young elephants (Figure 4 and Figure 5). Moreover, there is a close correlation between a host's diet and its gut microbiota [29,30], and diet may therefore have been the main reason for these differences. 
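The between-group comparisons in Sections 3.4 and 3.5 rely on the Kruskal–Wallis H test. A minimal sketch with invented abundance values for one bacterial family:

```python
from scipy.stats import kruskal

# Hypothetical relative abundances of one bacterial family in the three
# diet groups (values invented for illustration only).
milk_only = [0.02, 0.03, 0.01]
elephant_mix = [0.09, 0.11, 0.10]
goat_mix = [0.01, 0.02]

# The H statistic compares mean ranks across groups; a small p-value
# indicates that at least one group differs in abundance.
h_stat, p_value = kruskal(milk_only, elephant_mix, goat_mix)
```

The same rank-based test applies unchanged to the PICRUSt2 pathway abundances, since it makes no normality assumption about the input values.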
Previous studies have shown significant differences in the nutrient composition of Asian elephant milk [6,10,31,32] compared to goat milk [33,34]. In Asian elephant milk, the total solids (17.56–$19.60\%$), protein (3.30–$5.23\%$), and milk fat (7.70–$8.30\%$) contents were significantly higher than those in goat milk (11.53–$13.00\%$, 3.17–$3.75\%$, and 3.95–$4.25\%$ for total solids, protein, and fat contents, respectively), while the water content (81.90–$82.44\%$) was significantly lower than that of goat milk ($88.00\%$) (Table 1). The differences in gut microbiota composition and function between the mixed-feed diet groups in this study may therefore be mainly due to the differences in nutrient composition and content between elephant milk and goat milk. Comparisons of the nutrient composition and content of different kinds of milk with Asian elephant milk have been conducted in previous studies [35,36,37,38]. The nutritional composition and content of yak milk [35,36] are similar to those of Asian elephant milk (Table 1). Water, total solids, protein, milk fat, ash, and lactose accounted for $83.74\%$, 16.60–$18.52\%$, 4.68–$5.41\%$, 6.72–$8.18\%$, 0.72–$1.19\%$, and 4.40–$5.10\%$ of yak milk, respectively (Table 1). Although there has been no study on the intestinal microbiota of Asian elephants supplemented with yak milk, the similarity between the composition and content of yak milk and Asian elephant milk suggests that yak milk may represent a more viable choice than goat milk for the supplementation of rescued young Asian elephants. ## 4. Discussion Asian elephants are endangered wild animals, and few milk-dependent young elephants are available for study. Although the number of samples in each group in this study was limited, these were all the samples that could be collected in the Xishuangbanna region at that time. 
Here, the diversity of gut microbial communities of young elephants differed significantly between the milk-based diet groups, reflecting the various effects that these diets may have on the growth and development of young elephants. The richness (Chao1 index) and diversity (Shannon index) of the human intestinal microbiota are crucial indicators of health [39]. Claesson et al. [40] reported that preterm infants with necrotizing enterocolitis have a significantly lower diversity of fecal microbiota compared to those without the disease, and young children with lower gut microbiota diversity are at higher risk of developing allergic diseases later in life. Thus, the greater the gut microbiota richness and diversity, the more likely it is that the nutritional status and health of the host will be good. In this study, the elephant milk–plant mixed-feed diet group had higher intestinal microbiota diversity than the goat milk–plant mixed-feed diet group; therefore, although it is feasible to feed goat milk to young elephants, these results suggest that more suitable milk sources should be identified to serve as appropriate elephant milk supplementation for Asian elephants. Firmicutes and Bacteroidetes were the dominant phyla in all three groups, which is consistent with the results of Ilmberger et al. [41], and these are also the dominant phyla in the adult Asian elephant gut microbiota [21]. Intestinal Firmicutes carry many genes encoding proteins that ferment dietary fiber and can also interact with the intestinal mucosa, contributing to the stability of the host's internal environment [42]. Bacteroidetes are the main drivers of plant biomass degradation in Asian elephants [21,28,41]. These two bacterial taxa are indispensable for Asian elephants, as they assist plant digestion for energy acquisition. 
In the goat milk–plant mixed-feed diet group, the dominant phyla in the gut remained Firmicutes and Bacteroidetes, indicating that the use of goat milk to feed young Asian elephants could maintain the stability of the dominant phyla in the intestinal microbiota, allowing digestion and energy acquisition from food. The abundance of Spirochaetae in the intestinal microbiota of young Asian elephants was higher in the elephant milk–plant mixed-feed diet group than in the goat milk–plant mixed-feed diet group. Spirochaetae are associated with the cell motility pathway, which intestinal microbiota require to actively contact their substrates and facilitate the biochemical reactions of those substrates [43,44]. This suggests that goat milk is not the most suitable substitute for elephant milk. In addition, Lachnospiraceae were more abundant in young Asian elephants in the elephant milk–plant mixed-feed diet group than in the goat milk–plant mixed-feed diet group; this family is closely associated with host mucosal integrity, bile acid metabolism, and polysaccharide catabolism [45]. The low Lachnospiraceae abundance in the goat milk–plant mixed-feed diet group further suggested that goat milk may not be the best choice for feeding young Asian elephants. The abundance of Prevotellaceae and Rikenellaceae was higher in the mixed-feed diet groups than in the elephant milk-only diet group. A low abundance of Rikenellaceae and a high abundance of Prevotellaceae have been associated with obesity [46,47]. Therefore, the lower abundance of Rikenellaceae and higher abundance of Prevotellaceae in the goat milk–plant mixed-feed diet group compared to the elephant milk–plant mixed-feed diet group suggest that goat milk–plant mixed feeding may cause obesity in Asian elephants. In turn, this could lead to a potential risk of obesity-related diseases in Asian elephants. 
Synergistaceae encode multiple pathways that may be associated with the metabolism of diet-generated compounds [48] and are predicted to be key factors in dietary detoxification in herbivores. In this study, Synergistaceae were significantly enriched in the goat milk–plant mixed-feed diet group, which was consistent with the significant enrichment of biosynthesis of other secondary metabolites in this group. This was likely due to the excess secondary metabolism occurring during food digestion in this group. Meanwhile, whether the excess secondary metabolism was caused by the supplementation of goat milk or by specific components in the foraged plants requires further elucidation. Recent studies on the relationship between breast milk and the gut microbiota have revealed a correlation between milk composition and the gut microbiota in infants [31], and milk composition varies by mammalian species [49,50]. The composition and content of Asian elephant [5,10,31,32] and goat milk [33,34] differ significantly. Asian elephant milk is richer in nutrients than goat milk, which may have been the main reason for the difference in the composition and function of the gut microbiota between the elephant milk–plant mixed-feed diet group and the goat milk–plant mixed-feed diet group. Analysis of the nutrient composition and content of yak milk [35,36] indicates that it is similar to Asian elephant milk. Furthermore, in a study of retinoic acid-induced osteoporosis in mice, yak milk was found to improve bone quality and microstructure, promoting bone health [51]. Zhang et al. showed that yak milk could improve endurance capacity and relieve fatigue [52]. Yak dairy products are also reported to be particularly rich in functional and bioactive ingredients, which may play a role in maintaining the health of nomadic peoples [53]. 
Nutritional composition analysis of yak milk and its benefits in other animals suggest that yak milk may be a more suitable source of supplemental milk for Asian elephants than goat milk. ## 5. Conclusions By studying the gut microbiome of Asian elephants on different milk-containing diets, we found that a goat milk-supplemented diet does not appear to be the most suitable for young elephants. The composition and function of the gut microbiota of young elephants on a goat milk-supplemented diet were also described for the first time and compared with those of young elephants on an elephant milk-only diet and an elephant milk–plant mixed-feed diet. This study opens a new research avenue, the gut microbiome, for addressing the serious problem of the low survival rate of infant and young elephants due to insufficient breast milk. Furthermore, we demonstrate the importance of finding a more suitable supplemental or alternative source of breast milk for Asian elephants. We believe that, in the future, with the help of wildlife gut microbiome analysis, the best supplemental or alternative sources of milk can be identified for the infants and juveniles of other endangered wildlife, enhancing their wellbeing and relieving the threat to survival caused by insufficient breast milk.
# Characteristics of circulating small noncoding RNAs in plasma and serum during human aging ## Abstract Human aging is associated with increased susceptibility to age-related diseases due to the alteration of biological processes. Here we identified changes in extracellular small noncoding RNA (sncRNA) expression with age in plasma and serum samples. A machine learning-based aging clock was developed using age-related sncRNAs and is capable of predicting individual age. By profiling the circulating sncRNA transcriptome, we identified putative core biomarkers linked to the aging process. ### Objective Aging is a complicated process that triggers age-related disease susceptibility through intercellular communication in the microenvironment. While the classic secretome of the senescence-associated secretory phenotype (SASP), including soluble factors, growth factors, and extracellular matrix remodeling enzymes, is known to impact tissue homeostasis during the aging process, the effects of novel SASP components, extracellular small noncoding RNAs (sncRNAs), on human aging are not well established. ### Methods Here, by utilizing 446 small RNA-seq samples from plasma and serum of healthy donors found in the Extracellular RNA (exRNA) Atlas data repository, we correlated linear and nonlinear features between circulating sncRNA expression and age by maximal information coefficient (MIC) relationship determination. Age predictors were generated by ensemble machine learning methods (Adaptive Boosting, Gradient Boosting, and Random Forest), and core age-related sncRNAs were determined through weighted coefficients in the machine learning models. Functional investigation was performed via target prediction of age-related miRNAs. ### Results We observed that the numbers of highly expressed transfer RNAs (tRNAs) and microRNAs (miRNAs) showed positive and negative associations with age, respectively. 
Two-variable (sncRNA expression and individual age) relationships were detected by MIC, and sncRNA-based age predictors were established; across the three ensemble machine learning methods, all R² values were greater than 0.96 and root-mean-square errors (RMSE) were less than 3.7 years. Furthermore, important age-related sncRNAs were identified based on the modeling, and the biological pathways of age-related miRNAs were characterized through their predicted targets, including multiple pathways in intercellular communication, cancer, and immune regulation. ### Conclusion In summary, this study provides valuable insights into circulating sncRNA expression dynamics during human aging and may lead to an advanced understanding of age-related sncRNA functions upon further elucidation. ## INTRODUCTION Heterogeneity of human lifespan and health outcomes occurs due to differential aging processes. 1, 2, 3 Organismal aging is often accompanied by dysregulation of numerous cellular and molecular processes that triggers age-related pathologies such as tissue degradation, 4 tissue fibrosis, 5 arthritis, 6 renal dysfunction, 7 diabetes, 8 and cancer. 9 The highly proactive secretome from senescent cells, termed the senescence-associated secretory phenotype (SASP), is one of the main drivers of age-related pathogenesis through intercellular communication. 10 The classical SASP includes a secretome of soluble factors, growth factors, and extracellular matrix remodeling enzymes, 11 and it can transmit age-related information to healthy cells via cell-to-cell contact. As one of the emerging SASP components protected by extracellular vesicles (EVs), ribonucleoprotein (RNP) complexes, and lipoproteins, 12 extracellular RNAs (exRNAs) are found in many biological fluids 13 and can bridge the communication between “donor” and “recipient” cells through endocytosis, inducing paracrine senescence and pro-tumorigenic processes. 
14, 15 Deep sequencing of human plasma exRNA revealed that more than $80\%$ of sequencing reads mapped to small noncoding RNAs (sncRNAs) in the human genome, including microRNAs (miRNAs), PIWI-interacting RNAs (piRNAs), transfer RNAs (tRNAs), small nuclear RNAs (snRNAs), and small nucleolar RNAs (snoRNAs). 16 Extracellular miRNA expression in the plasma of mice changes with age, and cellular senescence can affect age-related homeostasis throughout the body via circulating miRNAs. 17 Other studies uncovered the roles of circulating miRNAs in age-related dysfunction such as osteogenesis imperfecta, 18 decreased myelination, 19 tumorigenesis, 20 and cardiovascular disease. 21 However, the molecular function of other circulating sncRNAs in aging and age-related diseases has been overlooked, and their expression profiles during the human aging process must be further characterized. In this study, we determined the extracellular sncRNA landscape during healthy human aging. Furthermore, we generated an aging clock based on dynamic changes in extracellular sncRNAs and identified putative core sncRNAs with larger contribution weights in machine learning models for age-related risk prediction. To achieve this, we used 446 pre-selected small RNA-seq datasets from plasma and serum samples (age: 20–99 years) and employed differential expression analysis and linear or nonlinear association measurements to determine age-related sncRNAs as primary inputs for comprehensive machine learning modeling. Based on supervised machine learning models, aging estimators were created with high accuracy, and sncRNA candidates with top importance values in the built models were considered final age-related biomarkers. Additionally, pathway enrichment of the targets of core miRNAs strengthens our view that extracellular sncRNAs change with age-related processes. 
## Overview of integrated human small RNAs dataset To profile sncRNA features during healthy human aging, we obtained small RNA-seq datasets from the Extracellular RNA (exRNA) Atlas data repository (https://exrna‐atlas.org). 22 This work includes studies for which information on age, health status, and gender was available, and only individuals with a healthy aging process were retained for analysis. For datasets meeting the quality control standards established by the Extracellular RNA Communication Consortium (ERCC) (see experimental procedures), we created a bioinformatics procedure for read mapping, processing, normalizing, categorizing, and modeling (Figure 1A). As a result of these criteria, 302 plasma and 144 serum samples (Figure 1B) were used in this study, with a similar number of samples representing each gender and ages ranging from 20 to 99 years (Figure 1C, Table S1). As these datasets originate from distinct studies with multiple sampling and library preparations, there were clear batch effects after Counts Per Million (CPM) normalization (Figure S1A,B). The ComBat function from the R package sva (v3.40.0) in Bioconductor 23 was employed to reduce or eliminate batch effects that may deviate from actual cross-study results (Figure S1C,D). These corrected data were used for the correlation measurements and machine learning training described below. **FIGURE 1:** *Identifying practical computational models of healthy aging via plasma and serum small noncoding RNAs (sncRNAs). (A) Flow chart of data preprocessing, normalizing, batch effect correcting, and analyses of 446 blood samples. (B) Proportion of plasma and serum samples from healthy donors. (C) Distribution of age and gender in plasma and serum* ## Identification of expressed sncRNAs in plasma and serum To determine sncRNAs expressed during aging, we considered sncRNAs with ≥1 CPM in at least $30\%$ of individuals within an age group (young (20–30), adult (31–60), and aged (61+) groups) as expressed sncRNAs. 
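The CPM normalization and the expression cutoff just described (≥1 CPM in at least $30\%$ of individuals within an age group) can be sketched as follows; the count matrix is a toy example, not the Atlas data:

```python
import numpy as np

# Toy raw-count matrix: rows = sncRNAs, columns = samples of one age group
# (values invented for illustration).
counts = np.array([
    [500, 300,   0,   0],   # detected in 2/4 samples
    [  2,   0,   0,   0],   # rare transcript
    [900, 800, 700, 600],   # broadly expressed
], dtype=float)

# Counts Per Million: scale each sample (column) by its library size.
library_sizes = counts.sum(axis=0)
cpm = counts / library_sizes * 1e6

# Keep sncRNAs with >= 1 CPM in >= 30% of the samples in this age group.
min_cpm, min_fraction = 1.0, 0.30
expressed = (cpm >= min_cpm).mean(axis=1) >= min_fraction
```

Raising `min_cpm` to 10 with the same fraction reproduces the "highly expressed" filter used in the next step of the analysis.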
As a result, 7953 and 6476 sncRNAs were observed in plasma and serum samples, respectively (Figure 1A). Further, we identified highly expressed sncRNAs by increasing the minimal CPM to 10, resulting in 1243 and 1139 sncRNAs retained in plasma and serum samples, respectively (Figure 1A, Table S2). In terms of the distribution of sncRNA subtypes in the three age groups, miRNAs account for a high proportion ($26.5\%$–$63.4\%$) of all sncRNAs in both plasma and serum, and their abundance consistently decreased with age (Figure 2A,B). tRNAs increased and became the dominant sncRNA in the aged group, while the expression of miRNAs was reduced in older individuals (Figure 2A,B). The corresponding mapped reads are proportional to the number of each highly expressed subtype, even though miRNAs showed relatively more sequencing reads than the other subtypes in both plasma and serum (Figure 2C,D). **FIGURE 2:** *Highly expressed sncRNAs in plasma and serum. Subtype distribution of highly expressed sncRNAs, which meet the expression cutoff (≥10 CPM in ≥30% of samples) among young (20–30 years), adult (31–60 years), and aged individuals (≥61 years) in plasma (A) and serum (B). Total sequencing reads of highly expressed sncRNAs among three age groups in plasma (C) and serum (D)* ## Exploring the correlation between sncRNAs and human aging We calculated the maximum information coefficient (MIC) 24 to investigate both linear and nonlinear associations between sncRNA expression and corresponding individual age. By employing the batch-corrected data of expressed sncRNAs, we identified 364 and 1941 age-related sncRNAs in plasma and serum, respectively (Figure 3A,B, Table S3). Intriguingly, piRNAs became the most abundant sncRNAs in the MIC measurement, with snRNAs representing the second largest group (Figure S2A,B). 
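MIC itself is usually computed with dedicated tools (e.g., the minepy implementation). As a dependency-light stand-in that captures the same idea of scoring both linear and nonlinear associations with age, the sketch below uses scikit-learn's `mutual_info_regression` on simulated data; the features, noise levels, and the 0.2 cutoff are all illustrative assumptions, not the paper's MIC/TIC ≥0.7 criterion.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
age = rng.uniform(20, 99, size=200)   # simulated donor ages, 20-99 years

# Toy "expression" features: one rises nonlinearly with age, one falls
# linearly, one is pure noise (all invented for illustration).
X = np.column_stack([
    np.log(age) + rng.normal(0, 0.05, 200),   # nonlinear age signal
    -0.5 * age + rng.normal(0, 2.0, 200),     # linear age signal
    rng.normal(0, 1.0, 200),                  # age-independent noise
])

# Mutual information scores both linear and nonlinear dependence on age.
scores = mutual_info_regression(X, age, random_state=0)
age_related = scores > 0.2   # hypothetical cutoff for this toy data
```

Both the curved and the linear feature clear the cutoff while the noise feature does not, which is the behavior that makes information-based measures preferable to plain Pearson correlation for this screen.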
Similarly, the over-represented biological processes of miRNA targets were identified: cellular response and epigenetic modification were enriched in plasma (Figure 3C), while biosynthetic processes were significantly observed in serum samples (Figure 3D). **FIGURE 3:** *Identification of age-related sncRNAs. MIC-based age-related sncRNAs in plasma (A) and serum (B), identified by both MIC and total information coefficient (TIC) values ≥0.7. Over-representation analysis of biological processes of MIC-based age-associated miRNA targets in plasma (C) and serum (D) (p-adjusted value <0.05)* ## Core feature selection of age-related sncRNAs As the expression of sncRNAs changes with age, further data-driven analysis was conducted to construct a human aging clock. MIC-based age-correlated sncRNAs were used as inputs to train regression models on plasma and serum samples. Compared to the linear models, such as Linear Regression (without feature selection) and Elastic Net (feature selection through regularization), the tree-based ensemble machine learning methods (Adaptive Boosting, Gradient Boosting, and Random Forest regressors) showed stronger predictive power and better accuracy (Figure 4), owing to their capability of learning underlying nonlinear patterns. With consistently strong performance in the test subsets (Table S4), all models taking age-correlated sncRNAs as inputs (MIC_plasma and MIC_serum) accurately predicted the ages of the corresponding individuals in the test sets, with average R² values greater than 0.96, root mean squared error (RMSE) values less than 3.7 years, and mean absolute error (MAE) values less than 1.9 years (Figure 4A–C). **FIGURE 4:** *Performance evaluation of sncRNA-based aging clocks built by Linear Regression, Elastic Net, Adaptive Boosting, Gradient Boosting, and Random Forest approaches. Summary of R² value (A), root mean squared error (RMSE) (B), and mean absolute error (MAE) (C). 
(D) Model fit based on plasma MIC-based associated sncRNAs. (E) Model fit based on serum MIC-based associated sncRNAs. All model fits were constructed using the Adaptive Boosting method.* Due to the strong generalization ability of all ensemble learning methods, core sncRNAs associated with aging processes were determined by combined statistics, and the sum of importance ranks across the three methods was used as the criterion for core sncRNA identification. As a result, 222 and 321 core sncRNAs overlapped across all three methods with MIC_plasma and MIC_serum as the inputs, respectively (Table S5). In particular, four snRNAs, three piRNAs, two small cytoplasmic RNAs, and one miRNA were identified as top core sncRNAs in plasma (Table 1). In serum samples, seven snRNAs, two tRNAs, and one small cytoplasmic RNA were identified as top core sncRNAs (Table 2). Notably, we also observed gender-specific model performance. When male-only samples were used as the training set for predicting female-only test sets, or vice versa, there were core sncRNAs unique to one gender (Figure S3A,B and Table S6), with slightly lower performance in R² and RMSE values compared to the models trained on gender-mixed data (Figure S3C,D). ## Core miRNAs are involved in aging-related processes To gain further insight into the potential functions of extracellular sncRNAs in the microenvironment, we focused on miRNAs, which are well characterized in post-transcriptional gene regulation. The top-ranked miRNAs with the largest importance scores in plasma and serum, hsa-miR-11181-3p and hsa-miR-7845-5p (Table S5), were selected, and their targets were separately predicted via the integration of eight miRNA databases. The expression profiles of these two miRNAs in the three age groups are shown in Figure S4, and the corresponding targets are included in Table S7. 
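The ensemble training and rank-sum importance selection described earlier in this section can be sketched with scikit-learn on simulated data; the feature effects, sample sizes, and the choice of three "core" features below are illustrative assumptions, not the paper's actual data or thresholds.

```python
import numpy as np
from sklearn.ensemble import (AdaBoostRegressor, GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, n_features = 300, 20
X = rng.normal(size=(n_samples, n_features))
# Hypothetical ages driven by the first three "sncRNA" features plus noise.
age = 60 + 15 * X[:, 0] - 10 * X[:, 1] + 8 * X[:, 2] + rng.normal(0, 2, n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(X, age, test_size=0.25, random_state=0)

models = {
    "AdaBoost": AdaBoostRegressor(random_state=0),
    "GradientBoosting": GradientBoostingRegressor(random_state=0),
    "RandomForest": RandomForestRegressor(random_state=0),
}

rank_sums = np.zeros(n_features)
results = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    results[name] = (r2_score(y_te, pred), mean_absolute_error(y_te, pred))
    # Rank features within each model (0 = most important) and sum the ranks
    # across models, mirroring the rank-sum criterion for core-sncRNA selection.
    rank_sums += np.argsort(np.argsort(-model.feature_importances_))

core_features = np.argsort(rank_sums)[:3]  # lowest rank sum = most important
```

The rank-sum step is what lets features favored by all three learners surface as "core", rather than relying on any single model's importance scale.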
As expected, these miRNA targets are enriched in canonical cell–cell communication pathways such as the Sulfur relay system and Endocytosis pathways, as well as in Immune development, Asthma, and Ras signaling pathways that are closely related to immune dysfunction and tumorigenesis during the aging process (Figure 5A). **FIGURE 5:** *Top core miRNAs are associated with human aging and aging‐related disease. (A) KEGG pathway enrichment analysis of core miRNA targets. Pathway terms are ranked by combined score in Enrichr. 73 (B) Interaction network among core miRNAs (in red), targets (in blue), and corresponding regulatory proteins (in purple). Only targets and interacting proteins with validated functions in cell senescence, human aging, and longevity (information from HAGR) are shown* We also investigated the association between miRNA targets and protein coding genes previously validated in the human aging process from the Human Aging Genomic Resources (HAGR), 25 and found that targets including DDIT3, HLA‐DQA1, PTK2B, TTR, and YWHAG have been experimentally associated with cancer progression, senescence, aging, and longevity (Table S8). Based on protein–protein interaction enrichment analysis, these targets were shown to have regulatory relationships with hallmark proteins such as PIK3R1, STAT3, IL7R, and JAK2 (Figure 5B and Table S9), which function in cancer, immune response, and intercellular transduction, bolstering the probability that non‐miRNA sncRNAs also have functions in aging and aging‐related diseases. ## DISCUSSION Our study comprehensively profiled the relationship of extracellular sncRNAs with age in blood and built an aging clock of healthy individuals using sncRNAs linearly and nonlinearly correlated with age. Previously, age predictors have been developed from DNA methylation sites, 26 transcriptome expression, 27, 28 repeat elements, 29 microRNAs, 2 and protein abundance.
30 This study provides the first detailed analysis of the relationship between circulating sncRNAs and age based on regression models and core sncRNAs whose expression changes with age, allowing reliable age prediction. Previous human biofluid studies have reported differential small RNA composition across multiple biofluids. Godoy et al. 31 profiled 12 normal human biofluids, including plasma and serum; among the mapped reads of the corresponding RNA sequencing (RNA‐seq), miRNA showed a relatively high fraction in adult plasma (median $63.8906\%$) compared to serum (median $36.0154\%$). The percentage of tRNA‐mapped reads, however, increased in serum (median $42.2067\%$), where tRNA became the most abundant RNA biotype, while the median value in adult plasma was only $0.7759\%$. One study determined the diversity of small RNA in different biofluids, and tRNA showed the largest percentage of mapped reads ($39.7\%$) in serum compared to plasma ($5.8\%$) and whole blood ($2.1\%$). 32 Also, Max et al. 33 characterized extracellular RNAs (exRNAs) from both plasma and serum samples of the same healthy volunteers and, interestingly, showed substantial differences in small RNA composition, with a higher proportion of miRNA in plasma and more tRNA reads in serum. We have some serum and plasma samples from the same individuals (Table S1), and consistent results were observed (Figure 2). Max et al. 33 also concluded that, even though plasma and serum come from the same origin, the two biofluid types show significant variability that impacts the exRNA profile. One reason is that additional absorption and continuous degradation of exRNAs by the retained blood clot reduce exRNA abundance. 33 Proper exRNA isolation is therefore essential, and immediate depletion of platelets and cell debris during plasma collection may minimize the loss of exRNA characteristics.
It is of interest to identify a detectable increase of highly expressed tRNAs in aged individuals; spleen and brain have been reported to show the highest tRNA expression, 34 which may indicate that distinct biological processes occur as individuals age. A previous report similarly found tRNAs to be the second most abundant sncRNAs in healthy adults (20–40 years), although small cytoplasmic RNA was not considered. 35 Unlike tRNAs, which drive protein synthesis, tRNA‐derived small RNAs (tsRNAs), including tRNA‐derived fragments (tRFs) and stress‐induced tRNA halves (tiRNAs), have been uncovered as sncRNAs related to the aging process. 36 As in human studies, the expression of tsRNAs increases during aging in Drosophila, 37 C. elegans, 38 and mouse brain cells. 39 Compared with healthy controls, differential expression of tsRNAs in age‐related diseases has been employed to predict diseases such as Alzheimer's disease and Parkinson's disease, 40 ischaemic stroke, 41 and osteoporosis. 42 tsRNAs serve not only as potential biomarkers but also as expressional regulators of age‐related mRNAs. 36 For example, 5′‐tRFTyr from tyrosine pre‐tRNA can silence PKM2, an inhibitor of p53, to cause p53‐dependent neuronal death. 43 The number of highly expressed miRNAs in our study tended to decrease in the older group, in both plasma and serum. Both core miRNAs identified by the machine learning models showed reduced expression with increasing age, similar to the decreased expression of the majority of age‐associated miRNAs in whole blood, 2 serum, 44 and peripheral blood mononuclear cells. 45 It has previously been demonstrated that circulating sncRNAs from serum samples show a strong association with human aging, 46 although a regression‐based model of human aging had not yet been built.
In our study, the potential function of core sncRNAs was predicted via miRNA target prediction, and these targets showed enrichment in cancer, cell cycle, and longevity‐regulating pathways. Overlapping genes are included in both the cancer and longevity regulation pathways, consistent with an earlier study that profiled miRNA expression between young and old individuals. 45 For example, increased PIK3R1 expression has been identified to impair the anti‐tumor effect of chemotherapy through PI3K‐Akt activation in breast and ovarian cancer. 47, 48 Previous research determined that the protein level of p85α, the subunit encoded by PIK3R1, was elevated with age, and that age‐associated miRNAs potentially targeting PIK3R1 were downregulated. 45 Studies in human aging also show that sequence variations within the PIK3R1 gene are significantly correlated with longevity, 49 and certain PIK3R1 genotypes were associated with longevity through a reduced mortality risk in cardiovascular disease. 50 Interestingly, both core miRNAs (hsa‐miR‐11,181‐3p and hsa‐miR‐7845‐5p) that are potentially involved in PIK3R1 regulation (Figure 5B) showed lower expression in aged individuals (Figure S4). hsa‐miR‐11,181‐3p has been used as a biomarker for distinguishing glioma from other brain tumor types. 51 By suppressing the Wnt signaling inhibitor APC2, overexpression of hsa‐miR‐11,181‐3p can promote the Wnt signaling pathway and increase cell viability in a colon cancer cell line. 52 The serum expression of hsa‐miR‐7845‐5p has been applied in constructing a diagnostic classifier of ovarian cancer, 53 and higher expression has also been observed in the serum of patients with persistent atrial fibrillation. 54 Some direct targets of core miRNAs have been determined to be drivers of age‐related processes.
For example, protein tyrosine kinase 2β (PTK2B) is a tyrosine kinase activated by angiotensin II through Ca2+‐dependent pathways to mediate ion channels as well as the MAP kinase signaling pathway. 55 PTK2B is involved in cell growth, inflammatory response, and osmotic pressure regulation after activation, and mutated PTK2B is statistically associated with hypertension in a Japanese population. 56 PTK2B has also been implicated in memory formation, and corresponding protein variants can trigger cognitive dysfunction and a higher prevalence of Alzheimer's disease. 57 As a nuclear protein activated by DNA damage, DNA‐damage inducible transcript 3 (DDIT3) shows increased expression and prevents gene transcription by dimerizing with transcription factors. 58 Specifically, DDIT3 plays a role in endoplasmic reticulum (ER) protein processing, and the resulting ER stress promotes cardiomyocyte senescence in mouse hearts. 59 The function of most age‐associated sncRNAs identified in this study is unknown, and further investigation into their function may provide meaningful results. We also observed mild sex‐dependent differences in the aging clock modeling. Similarly, a previous study indicated that sncRNA differences between genders were minor, 33 and sex‐specific training sets had relatively lower prediction performance compared to the gender‐mixed training sets. During this process, some gender‐dependent core sncRNAs were identified, including the male‐specific sncRNAs piR‐31,143 and piR‐48,977 in plasma, the male‐specific sncRNAs piR‐33,527 and piR‐57,256 in serum, the female‐specific sncRNAs hsa‐miR‐3789 and U5‐L214 in plasma, and the female‐specific sncRNAs U6‐L989 and piR‐30,597 in serum (Table S6). Further mechanistic study is needed to uncover their prospective roles in aging and aging‐related disease.
A major limitation of our current study is that the datasets utilized were developed by researchers for different, unique projects and with multiple RNA extraction protocols, which may bias extracellular RNA abundance. 35 Furthermore, trait information such as ethnicity, body mass, and smoking habits was not considered in our study due to the lack of such information, and more sophisticated and systematic sample processing and recording would help future research on big data‐based human aging modeling. In conclusion, we provide a novel insight into the circulating sncRNA profile of human aging. We developed predictive models that uncover core sncRNAs and estimate age by utilizing meta‐analysis‐based correlation measurement and machine learning modeling. The sncRNA dynamics with age provide valuable references for extracellular RNA studies in aging, and the potential mechanisms of age‐related intercellular communication by sncRNAs need further investigation. ## Data acquisition and filtration Human small RNA‐Seq datasets in the extracellular RNA (exRNA) Atlas data repository (https://exrna‐atlas.org) 22 were queried, with studies filtered using the following requirements: [1] data were sequenced from plasma/serum samples; [2] samples had definitive age and gender information within each study; and [3] the donors of the corresponding samples had a healthy status and were sampled as control individuals for the study. As a result, two studies (Accession ID: EXR‐MTEWA1ZR3Xg6‐AN and EXR‐TTUSC1gCrGDH‐AN) were included in both the plasma and serum analyses, and two studies (Accession ID: EXR‐TPATE1OqELFf‐AN and EXR‐KJENS1sPlvS2‐AN) contributed only plasma and only serum samples, respectively; 366 plasma and 188 serum samples passed preliminary filtration.
To avoid expression bias due to low sequencing depth and host genome contamination, we only retained samples that met the quality control (QC) standards developed by the Extracellular RNA Communication Consortium (ERCC). Briefly, each individual dataset should have a minimum of 100,000 reads aligned to annotated RNA transcripts (including miRNAs, piRNAs, tRNAs, snoRNAs, circular RNAs, protein coding genes, and long noncoding RNAs), and the ratio of transcriptome reads to total sequencing reads should be more than 0.5. Consequently, 302 plasma and 144 serum samples (Table S1) were retained for further analysis. ## Quantification and batch effect removal To generate expression matrices of sncRNAs, read adaptors and low‐quality bases were removed using the Trim Galore (v0.6.5) wrapper. 60 Clean reads were aligned and quantified with bowtie2 (v2.4.4) 61 and samtools (v1.1.4) 62 using miRNA and other sncRNA annotation files from miRBase (Release 22.1) and the DASHR (v2.0) 63 database, respectively. The raw sncRNA expression results were integrated and processed in the R (v4.1.1) computational environment to identify age‐related sncRNAs after preprocessing. To correct for actual expression characteristics masked by sequencing depth variability, gene read counts were transformed into CPM values after measuring normalized library sizes with the edgeR (v3.14) package. 64 Since obvious batch effects were still observed via principal component analysis (Figure S1), we conducted batch removal using the ComBat function in the sva package (v3.40.0), 23 and the processed CPM‐based data showed improved sample clustering by age (Figure S1). Batch‐effect corrected data were used for identifying the maximal information coefficient and constructing the machine learning models described below.
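The counts-to-CPM transformation described above can be sketched in plain NumPy. Note this is the simple CPM formula on raw library sizes; the study used edgeR, whose normalized library sizes (TMM) would give slightly different values, so this is only a conceptual sketch:

```python
import numpy as np

def counts_to_cpm(counts):
    """Convert a genes x samples raw-count matrix to counts per million (CPM)."""
    counts = np.asarray(counts, dtype=float)
    lib_sizes = counts.sum(axis=0)   # total reads per sample (column)
    return counts * 1e6 / lib_sizes  # rescale each sample to one million reads

# Toy matrix: 3 sncRNAs (rows) x 2 samples (columns); not study data.
raw = np.array([[100, 400],
                [300, 100],
                [600, 500]])
cpm = counts_to_cpm(raw)
print(cpm)  # every column now sums to 1e6
```

After this rescaling, expression values are comparable across samples regardless of sequencing depth, which is the property the downstream MIC screening and modeling depend on.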
## Identification of association between sncRNAs and age To select the sncRNAs representative of the age prediction model, the maximal information coefficient (MIC), 24 which permits the identification of important, difficult‐to‐detect associations, 65 was used to identify and screen the linear or nonlinear correlations between each sncRNA expression (X) and the individual's chronological age (Y). Reshef et al. 24 reported MIC − ρ2 to be near zero for linear relationships and MIC − ρ2 > 0.2 for nonlinear relationships, where ρ2 is the coefficient of determination (R 2). We also employed the total information coefficient (TIC) to evaluate the power of independence testing between X and Y. 66 The sncRNAs having both MIC and TIC values greater than 0.7 with actual age were retained for building models. ## Comprehensive machine learning modeling The corrected expression data of sncRNAs selected from differential expression analysis and MIC‐based correlation measurement were used for machine learning modeling. Since the sncRNA expression input can be seen as the explanatory variable X, a high‐dimensional vector, the modeling process was treated as a regression problem and formalized as: [1] $y = \hat{f}(X)$ where X denotes the sncRNA inputs, y denotes the individual's age, and $\hat{f}$ denotes the fitted mapping function. Ensemble learning methods, including Adaptive Boosting, Gradient Boosting, and Random Forest, were leveraged in this study, taking advantage of their strong generalization ability achieved by combining multiple weak learners. 67 Based on manual parameter tuning, the parameter “number of estimators,” which is the number of weak learners (i.e., the regression trees in this study) to be integrated in model fitting, was determined for each specific model based on the overall performance (RMSE, R 2, and MAE, shown in Table S10). The performance of ensemble learning was compared with linear regression and elastic net.
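The linear-versus-ensemble comparison can be sketched with the same scikit-learn estimators and metrics the study names. The data here are synthetic (a random matrix with a nonlinear "age" signal), standing in for the real sncRNA expression inputs:

```python
import numpy as np
from sklearn.ensemble import (AdaBoostRegressor, GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.linear_model import ElasticNet, LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the sncRNA matrix: 200 "individuals" x 50 "sncRNAs",
# with age driven nonlinearly by two features plus noise (not study data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
age = 50 + 10 * np.tanh(X[:, 0]) + 5 * X[:, 1] ** 2 + rng.normal(0, 1, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, age, test_size=0.25, random_state=0)

models = {
    "Linear Regression": LinearRegression(),
    "Elastic Net": ElasticNet(),
    "AdaBoost": AdaBoostRegressor(n_estimators=100, random_state=0),
    "Gradient Boosting": GradientBoostingRegressor(n_estimators=100, random_state=0),
    "Random Forest": RandomForestRegressor(n_estimators=100, random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5  # RMSE, as in Figure 4B
    print(f"{name}: R2={r2_score(y_te, pred):.2f}, "
          f"RMSE={rmse:.2f}, MAE={mean_absolute_error(y_te, pred):.2f}")
```

Because the synthetic signal contains a squared term, the tree-based ensembles typically recover it while the purely linear models cannot, illustrating why the nonlinear learners performed better in Figure 4.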
The corresponding importance of each sncRNA was calculated as an impurity‐based feature score (summing to 1), which reflects the fraction of the model's discriminative power contributed by that sncRNA. 68 Potential core sncRNAs were determined by sorting the corresponding sums of the ranks of their importance values across the ensemble learning models. Since the number of samples differs across age groups (young, adult, and aged), simple k‐fold cross‐validation may cause uneven sampling and thus poor model performance due to over‐fitting. Therefore, stratified k‐fold cross‐validation is a better option, as it avoids this issue by assigning approximately the same proportion of samples from each pre‐set age group to the training set (Figure S5). In this study, we used stratified fivefold cross‐validation based on the overall sample size. The regression modeling was conducted under Python 3.8.8 and scikit‐learn 0.24.1. 69 ## Targets prediction of age‐related miRNAs To better understand the potential function of circulating sncRNAs changing with age, we predicted the targets of miRNA candidates using the multiMiR R package (v3.14), 70 which integrates eight microRNA‐target databases (DIANA‐microT, ElMMo, MicroCosm, miRanda, miRDB, PicTar, PITA, and TargetScan). ## Functional enrichment analyses Functional enrichment analyses of genes targeted by age‐related miRNAs were performed through the Enrichr gene list‐based enrichment analysis tool. 71 We used the combined score, a combination of the P value and z‐score, to offset the false positive rate caused by the different lengths of each term and input set. For direct miRNA functional enrichment, an over‐representation analysis was performed via the miRNA Enrichment Analysis and Annotation Tool (miEAA 2.0), 72 with expressed miRNA sets as the background set; P values were adjusted using the Benjamini‐Hochberg (BH) procedure.
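The Benjamini-Hochberg adjustment mentioned above can be written out directly. This is the standard step-up procedure, sketched for illustration rather than miEAA's internal implementation:

```python
import numpy as np

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up FDR procedure)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)                        # ascending p-values
    ranked = p[order] * n / np.arange(1, n + 1)  # p_(i) * n / i
    # enforce monotonicity from the largest rank downwards, then cap at 1
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1].clip(max=1.0)
    out = np.empty(n)
    out[order] = adjusted                        # restore original ordering
    return out

print(bh_adjust([0.01, 0.02, 0.03, 0.5]))  # -> [0.04 0.04 0.04 0.5 ]
```

The monotonicity step is what makes this the step-up procedure: an adjusted p-value can never exceed the adjusted value of a larger raw p-value.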
## AUTHOR CONTRIBUTIONS PX performed the experiments and contributed to project design, data collection, execution of machine learning modeling and analysis, and manuscript writing. ZS and CL contributed to experimental design and execution of machine learning modeling and analysis. DEH contributed to data collection, analysis, and manuscript writing. ## FUNDING INFORMATION Not applicable. This research did not receive external funding. ## CONFLICT OF INTEREST The authors have no conflicts of interest to declare. ## DATA AVAILABILITY STATEMENT All of the small RNA‐Seq raw data (FASTQ) files and corresponding metadata are available directly from the Extracellular RNA (exRNA) Atlas data repository with study IDs EXR‐MTEWA1ZR3Xg6‐AN, EXR‐TPATE1OqELFf‐AN, and EXR‐TTUSC1gCrGDH‐AN, or from the database of Genotypes and Phenotypes (dbGaP) with accession ID phs000727.v1.p1 for study EXR‐KJENS1sPlvS2‐AN.
# Bone health in ambulatory male patients with chronic obstructive airway disease – A case control study from India ## Abstract Chronic obstructive airway disease (COPD) is a multimorbid disorder, with two thirds of those affected having at least one extra‐pulmonary complication. Bone health in COPD is little studied in developing nations; in our study, we report that osteoporosis is twice as common in COPD as in healthy individuals, with a significant number of patients demonstrating at least one parameter of adverse metabolic bone health on assessment. ### Objective Chronic obstructive airway disease (COPD) is characterized by airflow limitation due to airway and/or alveolar abnormalities with significant extra‐pulmonary manifestations. Bone health impairment is an extra‐pulmonary complication of COPD that is less well studied in India. Moreover, it can contribute to significant morbidity and mortality. Hence, we aimed to estimate the prevalence of osteoporosis and metabolic parameters of adverse bone health in patients with COPD. ### Methods In this case control study, male subjects aged 40–70 years with COPD attending the respiratory outpatient clinic of a tertiary care hospital were recruited over a period of 2 years, and the control population was derived from a historical cohort of apparently healthy individuals with no obvious disease. Metabolic parameters of bone health measured from fasting blood samples were calcium, albumin, alkaline phosphatase, phosphorus, parathormone, creatinine, 25‐hydroxy vitamin D, and testosterone. Bone mineral density (BMD) was estimated using DXA scan, and the World Health Organization (WHO) criteria were used to categorize participants into osteoporosis, osteopenia, and normal BMD based on the T‐score at the femoral neck, lumbar spine, and distal forearm. Pulmonary function tests and a 6‐minute walk test were performed if they had not been done in the previous 3 months.
The associations of COPD with osteoporosis were analyzed using linear regression analysis, and effect sizes are presented as beta coefficients with $95\%$ confidence intervals. ### Results Of the 67 participants with COPD enrolled in the study, osteoporosis was present in $61\%$ ($\frac{41}{67}$) and osteopenia in an additional $33\%$ ($\frac{22}{67}$) of the cases, which was higher when compared to the control population (osteoporosis $20\%$ [$\frac{50}{252}$] and osteopenia $58\%$ [$\frac{146}{252}$]). In regression modeling, there was a trend toward adverse bone health with advanced age, low body mass index, low forced expiratory volume in 1 second, and testosterone deficiency in COPD. ### Conclusion Individuals with COPD have a substantially higher prevalence of osteoporosis and osteopenia, up to almost twice that of the general population, with a significant number demonstrating at least one parameter of adverse metabolic bone health on assessment. Hence, bone health assessment should be a part of comprehensive COPD care to prevent adverse consequences of poor bone health. ## INTRODUCTION The Global Initiative for Chronic Obstructive Lung Disease (GOLD) defines chronic obstructive pulmonary disease (COPD) as a progressive disease characterized by persistent airflow limitation. 1 COPD is a preventable and treatable disease; however, it contributes to significant morbidity in affected individuals due to its pulmonary and extra‐pulmonary effects. The burden of COPD is steadily increasing in both developed and developing countries. A recent World Health Organization (WHO) report estimates that around 328 million people worldwide are living with moderate to severe COPD, and more than 3 million deaths in 2005 were attributed to COPD or its systemic complications. 2 This corresponds to $5\%$ of deaths reported globally, although this number may be higher given that $90\%$ of deaths occurred in developing countries where the reporting systems are suboptimal.
COPD is the second leading cause of disease burden in India, contributing to $8.7\%$ of the total deaths and $4.8\%$ of the total disability adjusted life years (DALYs). 3, 4, 5 Death due to COPD is higher in male patients and in people with longer disease duration, frequent exacerbations, and significant extrapulmonary complications. 6 With advances in the treatment of COPD over the last 2 decades, people live longer, with more than two thirds affected by at least one extrapulmonary complication. 6, 7 Cardiovascular comorbidity is one of the most feared extra‐pulmonary complications, characterized by an increased incidence of systemic and pulmonary arterial hypertension, congestive cardiac failure, and arrhythmias. 8 In a study by De Luise et al, there was a significant increase in 30‐day mortality after a hip fracture in patients with COPD when compared with patients without COPD. 9 This additional risk extends well beyond the immediate postoperative period, with the mortality rate remaining nearly threefold higher even after a year. Hence, non‐communicable diseases like osteoporosis have emerged as significant contributors to disease morbidity and mortality. The increased risk of osteoporosis in patients with COPD has been attributed to the systemic nature of the disease and to its treatment, which requires glucocorticoids, especially in those with frequent exacerbations. 10 Major societal guidelines do not recommend COPD as a risk factor for osteoporosis screening. 11, 12 The Fracture Risk Assessment tool (FRAX), one of the most popular assessment tools, does not include COPD as a risk factor in its algorithm but does include current smoking and glucocorticoid use as factors contributing to a higher risk score. 13 QFracture, another commonly used risk assessment tool, includes COPD as a risk factor for major osteoporotic fracture.
14 Neither of these risk scores takes into account factors such as the dose of and repeated exposure to oral steroids and high‐dose inhaled glucocorticoids, which are commonly used for exacerbations in patients with uncontrolled COPD and can independently predispose them to an increased risk of fracture and added morbidity. There is also a paucity of data on bone health in patients with COPD in developing countries like India. Hence, we designed this study to estimate the prevalence of osteoporosis and other metabolic bone health indices in this cohort of patients. ## SUBJECTS AND METHODS This was a case control study conducted between September 1, 2012, and June 30, 2014. The study was approved by the institutional review board. Consecutive male patients between 50 and 70 years of age attending the Respiratory Medicine outpatient services were screened, and those with known COPD, or newly diagnosed COPD as per the GOLD criteria, were enrolled as cases. 1 Subjects of this age and gender were selected to homogenize the study population and to minimize the influence of hormonal changes affecting bone health seen at the extremes of age, particularly in women. Subjects with hyperthyroidism, hyperparathyroidism, Cushing's syndrome or any other severe systemic illness, immobilization, and those already on calcium and vitamin D were excluded from the study. The control population was derived from a cluster random sample of 242 individuals from the community who were apparently healthy, without COPD, and of similar age and gender to the cases. 15 They were also from the same region, to avoid the confounding effect of ethnicity on bone health. The prevalence of osteoporosis in the control population at any site was $20\%$ ($15\%$ at the lumbar spine and $10\%$ at the femoral neck), and further details of this study can be found elsewhere. 15 Written informed consent was obtained from all subjects.
Data were obtained regarding age, symptoms, exacerbation triggers of COPD, and the severity of the disease. A detailed medication history, including oral and inhaled glucocorticoid frequency, dose, and duration, was documented along with the presence of pre‐existing comorbidities (eg, diabetes, hypertension, and dyslipidemia). The doses of inhaled glucocorticoids were converted to budesonide‐equivalent doses. Patients were then categorized into high‐dose and less‐than‐high‐dose groups based on the cumulative daily inhaled glucocorticoid dose: the high‐dose category received a cumulative budesonide dose > 800 μg/day and the latter received less than 800 μg/day. The cumulative dose of oral glucocorticoids was calculated as the prednisolone‐equivalent dose. A validated semiquantitative food frequency questionnaire (FFQ) was used to calculate dietary calcium intake by the 24‐hour dietary recall method. 16 Sunlight exposure was calculated from the duration for which the patient's body surface was directly exposed to sunlight while the shadow cast was shorter than the person's actual height. 17 All subjects underwent spirometry using the Jaeger spirometer and a 6‐minute walk test, assessed as per the American Thoracic Society Guidelines. 18 The GOLD criteria were used to categorize patients into the various disease stages. 1 The body mass index, airflow obstruction, dyspnea, and exercise (BODE) index, a composite marker of disease severity that takes into consideration the systemic nature of the disease, was calculated for all patients. 19 The mortality risk according to the BODE index is as follows: a score greater than 7 is associated with a $30\%$ 2‐year mortality, a score of 5–7 with a $15\%$ 2‐year mortality, and a score below 5 with a $10\%$ 2‐year mortality.
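The BODE-to-mortality mapping quoted above can be written as a small helper. The function name and structure are ours; only the thresholds and percentages come from the text:

```python
def bode_two_year_mortality(score: int) -> int:
    """Map a BODE index score to the quoted 2-year mortality risk (%).

    Thresholds follow the text: > 7 -> 30%, 5-7 -> 15%, < 5 -> 10%.
    (Illustrative helper, not part of the study's analysis code.)
    """
    if score > 7:
        return 30
    if score >= 5:
        return 15
    return 10

print(bode_two_year_mortality(8))  # -> 30
```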
20 Assessment of bone mineral density (BMD) was performed using the Hologic DXA Discovery QDR 4500 at the lumbar spine, femoral neck, and distal forearm by the same technician. The reference standard was the manufacturer's database of healthy young White subjects, with a precision of $2\%$, and the WHO T‐score criteria for osteoporosis were used to categorize the patients. 21 Early morning fasting blood samples were collected to assess the following metabolic bone and other biochemical parameters: serum calcium (normal [N]: 8.3–10.4 mg/dL), phosphorus (N: 2.5–4.6 mg/dL), albumin (N: 3.5–5.0 g/dL), alkaline phosphatase (ALP; N: 40–125 U/L), creatinine (N: 0.5–1.4 mg/dL), 25‐hydroxyvitamin D3 (25[OH]D; N: 30–70 ng/mL), intact parathyroid hormone (iPTH; N: 8–50 pg/mL), C‐reactive protein (CRP; N: < 6 mg/L), total testosterone (N: 300–1030 ng/dL), and cortisol (N: 7–25 μg/dL). The biochemical variables calcium, phosphorus, creatinine, albumin, and ALP were measured in a fully automated computerized microanalyzer (Hitachi model 911; Boehringer Mannheim). The intra‐assay and inter‐assay coefficients of variation of the variables studied on these machines were $1\%$–$5\%$. Intact PTH, testosterone, and 25(OH) vitamin D were measured by chemiluminescence immunoassay using an Immulite 2000 analyzer. Vitamin D status was defined as sufficient for 25(OH)D levels above 30 ng/mL and deficient for levels below 20 ng/mL. CRP was estimated by immunonephelometry (BN ProSpec; Dade Behring) according to the manufacturer's protocol using the CardioPhase highly sensitive CRP reagents. Hypogonadism was defined as an 8 am total serum testosterone < 300 ng/dL. ## SAMPLE SIZE CALCULATION AND STATISTICAL ANALYSIS The sample size was calculated using prevalence data from a previously published study from India.
14 A sample size of 64 subjects was required to study the prevalence of low bone density (osteoporosis and osteopenia), assuming a prevalence of $80\%$ based on the previous Indian study, using the equation 4pq/d² with a precision of $10\%$. Continuous variables were described using means and standard deviations or medians and interquartile ranges (IQR) depending on normality. All categorical variables were summarized using frequencies and percentages. Associations of continuous variables with low bone density were tested using the independent t test, and the chi‐square test was used for categorical associations. The T‐scores of each region were treated as continuous outcomes because the larger part of the cohort had either osteopenia or osteoporosis. A linear regression model was used to determine significant predictors. A univariate model was used to define the individual effect of each predictor, and a multivariate model was constructed adjusting for variables with an entry criterion of P value < 0.20. Effect sizes are presented as beta (with $95\%$ confidence interval [CI]). For all analyses, significance was set at $P \leq 0.05.$ The results of this study were compared with a historical cohort of previously published subjects of the same ethnicity without COPD. 15 All statistical analyses were done using STATA/IC version 16.0. ## RESULTS This study included 67 male subjects diagnosed with COPD based on the GOLD criteria. The mean (±SD) age of the study population was 60 (±6) years, and the mean duration of COPD was 48 months (Table 1). **TABLE 1** | Characteristic | Overall (n = 67) | Normal (n = 6) | Osteopenia (n = 33) | Osteoporosis (n = 28) | P value d | | --- | --- | --- | --- | --- | --- | | Age (y) a | 60.2 ± 6.9 | 59.5 ± 6.8 | 59.2 ± 7.3 | 61.6 ± 6.4 | 0.176 | | Current smokers c | 7 (10) | 0 (0) | 2 (6.1) | 5 (17.9) | 0.093 | | No. of pack years b | 30 (20, 46.5) | 30 (28, 40) | 24 (15, 44.5) | 36 (25, 50) | 0.176 | | Duration of COPD in months b | 48 (24, 72) | 18 (12, 39) | 60 (36, 84) | 54 (24, 72) | 0.673 | | 6 MWD (meters) a | 348 ± 92.1 | 318.9 ± 84.3 | 370.2 ± 97.1 | 328 ± 84 | 0.134 | | FEV1 a | 42.2 ± 18.6 | 51.9 ± 21.8 | 44.3 ± 19.6 | 37.5 ± 16 | 0.085 | | FVC a | 61.3 ± 17.2 | 71.8 ± 14.9 | 61 ± 18.8 | 59.5 ± 15.4 | 0.464 | | FEV1/FVC a | 67.6 ± 17.5 | 70.3 ± 20.9 | 72.2 ± 18.8 | 61.7 ± 13.4 | 0.017 | | Oral steroid dose b | 0 (0, 0) | 0 (0, 0) | 0 (0, 0) | 0 (0, 20) | 0.287 | | Oral steroid duration in the last 1 y b | 0 (0, 0) | 0 (0, 0) | 0 (0, 0) | 0 (0, 5) | 0.282 | | Dietary calcium intake b | 1156.3 ± 264.2 | 1048.3 ± 231.4 | 1157.9 ± 291.6 | 1177.5 ± 238.5 | 0.581 | The majority of the patients were distributed equally across stages II, III, and IV; there was only one patient with stage I disease. The numbers of patients in the three BODE categories (< 5, 5–7, and > 7) were 8, 7, and 52, respectively. Nine of the study participants received high‐dose inhaled glucocorticoids, of whom one had osteoporosis and the rest had osteopenia. Seven patients received oral glucocorticoids in the last 2 years. As expected, these patients were in the stage III and IV disease categories and had high BODE index scores. The prevalence of vitamin D deficiency was $52\%$ (N: $\frac{35}{67}$). Biochemical hypogonadism was seen in $31\%$ (N: $\frac{21}{67}$). The duration of sunlight exposure was equal across all groups. The prevalence of osteoporosis at any one site in this study was $61\%$ ($\frac{41}{67}$). The prevalence of osteoporosis was almost equal at the lumbar spine ($24\%$, $\frac{16}{67}$) and the femoral neck ($25\%$, $\frac{17}{67}$). The prevalence of osteopenia at the lumbar spine and femoral neck was $47\%$ ($\frac{31}{67}$) and $53\%$ ($\frac{36}{67}$), respectively.
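The sample-size formula quoted in the statistics section, n = 4pq/d², can be checked directly; with p = 0.80 and d = 0.10 it reproduces the 64 subjects reported. The function name is ours, for illustration:

```python
def sample_size(p: float, d: float) -> int:
    """n = 4*p*q / d**2: the quoted formula for estimating a prevalence p
    with absolute precision d, where q = 1 - p."""
    q = 1.0 - p
    return round(4 * p * q / d ** 2)

# Prevalence 80%, precision 10%, as stated in the methods.
print(sample_size(p=0.80, d=0.10))  # -> 64
```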
There was an increased prevalence of osteoporosis, $33\%$ (22/67), and of osteopenia, $33\%$ (22/67), at the distal forearm compared with the other sites (Figure 1). **FIGURE 1:** *Prevalence of osteoporosis between cases and controls across different sites.* In the univariate regression model, a lower T‐score at any one site in male patients with COPD was significantly associated with age, body mass index (BMI), smoking status, forced expiratory volume in 1 second (FEV1), and FEV1/FVC (Table 2). BMI remained significantly associated with a lower T‐score even in the multivariate analysis (Table 3). The mean BMD in the present study was compared with age‐ and gender‐matched controls without COPD or other chronic diseases affecting bone health (Table 4). 15 The mean BMD at the femoral neck for patients with COPD (0.692 g/cm2) was significantly lower than in healthy subjects of similar age group, ethnicity, and gender (0.761 g/cm2, $P < 0.001$). A similar finding was observed in the lumbar spine region (mean BMD in patients with COPD: 0.906 g/cm2 vs. 0.943 g/cm2 in normal subjects, $P = 0.024$).
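As a cross‑check, the sample‑size formula quoted in the methods ($4pq/d^2$) and the site‑wise prevalence fractions reported above can be reproduced in a few lines; a minimal sketch in Python (the function name is illustrative, not from the paper):

```python
def sample_size(p, d):
    """Sample size for a prevalence estimate, n = 4*p*q / d^2
    (the paper's formula, i.e., z ~ 2 for a 95% confidence level)."""
    q = 1 - p
    return round(4 * p * q / d ** 2)

# Anticipated prevalence of low bone density 80%, absolute precision 10%
print(sample_size(0.80, 0.10))   # 64, the required sample size in the methods

# Site-wise prevalence among the 67 enrolled subjects
print(round(100 * 41 / 67))      # 61 (% with osteoporosis at any one site)
print(round(100 * 17 / 67))      # 25 (% with osteoporosis at the femoral neck)
```

The quotient is exactly 64 here; `round` is used rather than a ceiling only to sidestep floating‑point edge cases in this arithmetic check.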
**TABLE 4** | Parameters | COPD (n = 67) Mean (SD) | Non‐COPD 15 (n = 252) Mean (SD) | Unpaired t test P value | | --- | --- | --- | --- | | Serum calcium (mg/dL) | 9.32 (0.56) | 8.82 (0.43) | < 0.001 | | Serum PO4 (mg/dL) | 3.65 (0.75) | 3.9 (0.5) | 0.001 | | Serum iPTH (pg/mL) | 57.11 (28.59) | 44.5 (25.6) | < 0.001 | | Serum alkaline PO4 (U/L) | 83.84 (28.42) | 73.5 (21.4) | 0.001 | | Serum 25 OH vitamin D (ng/mL) | 25.25 (16.50) | 20.4 (8.3) | < 0.001 | | Serum testosterone (ng/dL) | 381.15 (173.71) | 620 (124) | < 0.001 | | ESR (mm/h) | 17.37 (12.93) | – | | | CRP (mg/L) | 11.04 (13.89) | – | | | Bone mineral density | | | | | Femoral neck (g/cm2) | 0.692 (0.130) | 0.761 (0.124) | < 0.001 | | Lumbar spine (g/cm2) | 0.906 (0.145) | 0.943 (0.111) | 0.024 | | Distal forearm (g/cm2) | 0.588 (0.089) | – | | ## DISCUSSION In the current study, the prevalence of osteoporosis in men with COPD was $61\%$, and hypovitaminosis D was seen in $52\%$ of the study subjects. These results, along with previously published data, confirm that people with COPD have weaker bone mass and that the prevalence of osteoporosis is nearly double that of healthy men in the same community (Table 4). 15, 22, 23, 24 The osteoporosis prevalence from our study matches data from two other previously published reports from India. The first was published by Bhattacharya et al, who measured BMD using calcaneal ultrasound. 22 In the second study, by Hattiholi et al, the prevalence of osteoporosis and osteopenia was $66.7\%$ and $19.6\%$, respectively. 23 However, other parameters relating to adverse bone health were not reported in either of these studies. The prevalence of osteoporosis reported in these Indian studies was higher than in Western studies. 25, 26 In the multicentric TOwards a Revolution in COPD Health (TORCH) trial, the prevalence of osteoporosis and osteopenia was $18\%$ and $41\%$, respectively.
27 The higher prevalence of osteoporosis in our study and in other studies reported from India may be due to an increased community prevalence of osteoporosis and vitamin D deficiency, a more advanced stage of disease, and higher doses of glucocorticoids used for treatment. 28 The increased risk for osteoporosis in patients with COPD reflects the systemic nature of the disease, glucocorticoid intake, changes in body composition and weight, decreased activity, reduced exercise reserve, and reduced sunlight exposure caused by dyspnea on mobility during advanced stages of the disease. What causes this systemic dysfunction is not clearly understood, but several hypotheses have been postulated and tested. The two important ones are the systemic spillover theory and the compartment model. The systemic spillover hypothesis assumes that cytokines and inflammatory mediators generated by chronic inflammation in the lungs spill over into the systemic circulation. 29, 30 The compartment model states that there are two or more compartments in which the disease process is ongoing simultaneously. 31, 32 The distant organs or systems affected, as mentioned earlier, are the cardiovascular system, adipose tissue, and bone, while the primary organ is the lungs. The mean (±SD) BMI of our study population was 23 (±5.06) kg/m2. BMI in our study population is similar to that in the other two studies reported from India, whereas Western study populations have a much higher BMI. 22, 23 In our study, BMI was positively correlated with BMD. Mechanical loading of bone increases bone strength and remodeling, but this effect ultimately depends on the fat‐free mass that contributes to the loading.
33 Fat‐free mass in patients with COPD has been reported to be low, and this depends on disease severity, decreasing by $20\%$ in clinically stable patients with COPD and by up to $41\%$ in severe cases requiring pulmonary rehabilitation, when compared with the age‐ and gender‐matched general population. 34 Leptin, an adipocyte‐derived hormone, has a biphasic effect on bone modeling and remodeling. At low concentrations, it promotes proliferation and differentiation of osteoblasts, but at high concentrations it inhibits bone formation through both central and peripheral effects. 35 Moreover, this effect of leptin is more pronounced in obese women with COPD, who have high circulating leptin levels. 36 Hence, body weight and BMI have a complicated relationship with bone health. The other parameters that were significant in the regression modeling were testosterone deficiency and FEV1 level. It is well established that testosterone has positive effects on bone formation, both directly and indirectly through aromatization to estrogen. 37 Testosterone exerts its direct effects by binding to androgen receptors expressed on pre‐osteoblasts, helping their maturation, whereas estrogen promotes bone formation and inhibits resorption through its action on the estrogen receptor. 21 FEV1 had a positive effect on bone health, likely reflecting the systemic state of the patient, as a higher FEV1 indicates better lung function: such individuals mobilize better, allowing proper bone loading and sunlight exposure, and require less steroid for disease control. The inflammatory markers erythrocyte sedimentation rate (ESR) and CRP were elevated in our study population. Chronic inflammatory disease has been shown to induce proteins, such as Dickkopf‐1 and sclerostin, that suppress bone formation and increase osteoclastogenesis.
38 By inhibiting the Wnt pathway, these proteins, along with several other cytokines, such as IL‐15, interferon gamma, IL‐17, MCP‐4 (monocyte chemoattractant protein), and TNF‐α, blunt bone formation, thereby leading to osteoporosis and its sequelae. 39, 40 Regular use of oral glucocorticoids significantly increases the risk of osteoporosis. 41 This is due to the uncoupling of bone formation as well as to the direct toxic effect of steroids on osteoblasts. High‐dose inhaled glucocorticoids are known to have systemic effects, including adverse bone effects and dose‐related adrenal suppression. 42 Our study had only nine participants ($14\%$) on high‐dose inhaled glucocorticoids, and this did not achieve statistical significance for adverse bone health, potentially because of the small sample size. This finding is similar to the TORCH trial, which did not show an increase in bone loss in people taking inhaled glucocorticoids compared with those on placebo. 27 Although the study population resides in and around Vellore (latitude 12°55′N, longitude 79°11′E), where there is abundant sunlight throughout the year, only $13\%$ had adequate exposure to sunshine. Sunlight is an abundant source of vitamin D, which in turn is an intermediate factor contributing to bone health. 43 Exposure to sunlight should occur when vitamin D synthesis is at its peak, usually around midday, when the ultraviolet B component of sunlight is at its maximum. A practical surrogate marker is the time when the length of the shadow formed is less than the individual's height, and the recommended duration of exposure is at least 30 minutes. 28 Because of restricted outdoor activity due to dyspnea, and in the late stages due to the requirement for oxygen therapy, such exposure can be limited in patients with COPD.
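The shadow‑length surrogate quoted above has a simple geometric reading: since shadow length = height / tan(solar elevation), the shadow is shorter than the person's height exactly when the solar elevation exceeds 45°. A minimal sketch under that assumption (function names are illustrative, not from the paper):

```python
import math

def solar_elevation_deg(height_m, shadow_m):
    """Solar elevation angle (degrees) implied by a person's height
    and the length of their shadow."""
    return math.degrees(math.atan(height_m / shadow_m))

def exposure_adequate(height_m, shadow_m, minutes):
    """Surrogate rule from the text: shadow shorter than the person's
    height (equivalent to solar elevation > 45 degrees) for >= 30 min."""
    return shadow_m < height_m and minutes >= 30

# Near midday: a 1.7 m person casting a 1.0 m shadow
print(round(solar_elevation_deg(1.7, 1.0)))  # 60 degrees
print(exposure_adequate(1.7, 1.0, 40))       # True
print(exposure_adequate(1.7, 2.5, 40))       # False: shadow longer than height
```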
The dressing pattern among Indian men exposes only the face and feet to sunlight during outdoor activities. Hence, only $23\%$ of our study population had sufficient 25(OH)D levels, which is lower than the community prevalence in healthy individuals. To our knowledge, no other study from India has reported the prevalence of vitamin D deficiency in patients with COPD. Comparing our prevalence data with Western studies would be inappropriate, as vitamin D synthesis from sunlight exposure depends on the solar zenith angle, minimal erythema dose, duration of sunlight exposure, and dressing pattern. 44, 45 The limitation of our study is the small sample size, which precludes comparisons across different stages of COPD. However, this is, to our knowledge, the first study from India to assess parameters other than BMD to examine bone health in male patients with COPD. It may be prudent to conduct similar studies separately in premenopausal and postmenopausal women with COPD to understand the profile of their bone health. ## CONCLUSION Osteoporosis and an abnormal bone health profile are highly prevalent among patients with COPD. Differences in patient characteristics and diagnostic tools account for the varied prevalence across studies; in any case, it is much higher than in the general population. The higher prevalence of osteoporosis was in the past attributed solely to increased glucocorticoid exposure, but parameters of adverse bone health were seen even in steroid‐naive patients, suggesting a more complex underlying mechanism. Osteoporosis and osteoporotic fracture‐related morbidity and mortality will add to the already existing disease burden in those affected by COPD.
These complications can be prevented with proper screening and intervention, including lifestyle changes (increasing dietary calcium intake and ensuring adequate sunlight exposure), vitamin D and calcium supplementation, and bisphosphonates when needed. This should be included in the comprehensive COPD care plan and modified to suit each individual patient's needs. ## AUTHOR CONTRIBUTIONS Research and study design: Jeeyavudeen, Hansdek, Thomas, Balamugesh, Gowri, and Paul. Data collection: Jeeyavudeen, Hansdek, Gowri, and Paul. Data analysis: Balamugesh, Gowri, and Paul. Interpretation and conclusion: Jeeyavudeen, Hansdek, Thomas, and Paul. Preparation of manuscript: Jeeyavudeen, Hansdek, and Paul. Review of manuscript: Jeeyavudeen, Hansdek, Thomas, Balamugesh, Gowri, and Paul. Critical revision: Jeeyavudeen, Hansdek, and Paul. Guarantor for the study: Jeeyavudeen. ## FUNDING INFORMATION The protocol was approved by the institutional review board (IRB) of Christian Medical College, Vellore, and funding was provided by the FLUID grant of the IRB. The funding source had no involvement in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the paper for publication. ## CONFLICT OF INTEREST The authors report no conflicts of interest for this study. ## ETHICAL APPROVAL This study was approved by the Office of Research, Institutional Review Board, Christian Medical College, Vellore, India (IRB Min No: 7996, dated February 12, 2013).
# Circadian dysfunction and Alzheimer's disease – An updated review ## Abstract Alzheimer's disease (AD) is considered the most typical form of dementia and provokes irreversible cognitive impairment. Along with cognitive impairment, circadian rhythm dysfunction is a fundamental factor in aggravating AD. A link among circadian rhythms, sleep, and AD has been well documented. The etiopathogenesis of circadian system disruption and that of AD share some general characteristics, which opens up the possibility of viewing them as mutually dependent processes. In this review, we have focused on different factors that are related to circadian rhythm dysfunction. Various pathogenic factors, such as amyloid beta, neurofibrillary tangles, oxidative stress, neuroinflammation, and circadian rhythm dysfunction, may all contribute to AD. We also focus on melatonin, which is produced by the pineal gland and can be used to treat circadian dysfunction in AD. Aside from amyloid beta, tau pathology may have a notable influence on sleep. Conclusively, this review centers on the principal mechanistic complexities associated with circadian rhythm disruption, sleep deprivation, and AD, and it also emphasizes potential therapeutic strategies to treat and prevent the progression of AD. Amyloid beta plaques and accumulations of tau tangles are the two major pathological hallmarks of Alzheimer's disease. Cholinergic disturbance, HPA axis dysfunction, neuronal loss, and retinal ganglion cell loss disturb the circadian rhythm, which in turn contributes to AD. ## INTRODUCTION Alzheimer's disease (AD) is the most common type of neurodegenerative disorder; it largely causes dementia and mainly affects older people. By the year 2050, around 12 million cases are projected to be reported.
1, 2 In AD, accumulation of amyloid beta and hyperphosphorylated tau are the microscopic pathologies, whereas reduction in hippocampal volume and frontotemporal and associated cortical atrophy with ventricular enlargement are the macroscopic findings. 3, 4, 5 Multiple biomarkers are available to evaluate AD, such as cerebrospinal fluid (CSF) molecules (for example, amyloid and tau), and brain atrophy can be visualized with various neuroimaging techniques, such as computed tomography, magnetic resonance imaging, or positron emission tomography (PET). Current pharmacological treatments include donepezil, galantamine, and rivastigmine, which work as cholinesterase inhibitors. Memantine works as an N‐methyl‐D‐aspartate antagonist, and aducanumab was approved in 2021. 6, 7 Most current studies focus on the molecular aspects of AD, mainly neuroinflammation, mitochondrial dysfunction, and glial cell activation. 8 Researchers are now turning to circadian rhythms, which help to explain AD pathophysiology in a relatively comprehensive and satisfactory way and also help to identify and develop therapeutic targets for AD. Sleep disruptions and circadian disorders are quite common; around $45\%$ of patients face problems with sleep. 9, 10 These symptoms are present in many patients with AD even before the final medical diagnosis of AD. Multiple studies indicate that sleep disturbances can lead to neurodegeneration and even cognitive impairment, and in the future they may be utilized as a biomarker for neurodegeneration. One study found that older women with diminished and irregular circadian rhythms have a higher risk of developing AD‐related impairment, such as mild cognitive impairment and dementia. Various studies suggest that $25\%$–$66\%$ of patients with AD face easily noticeable sleep disruption.
11, 12, 13, 14, 15, 16, 17 Melatonin (N‐acetyl‐5‐methoxytryptamine) is a hormone regulated by circadian rhythms, and it plays a vital role in the neurodegenerative events of AD. 18 The primary source of melatonin is the brain's pineal gland, but other tissues, like the retina, bone marrow, kidney, pancreas, skin, and glial cells, also produce it. Melatonin is a multifunctional hormone that regulates circadian rhythm and shows anti‐inflammatory, cytoprotective, and anti‐oxidant properties. The circadian clock regulates melatonin secretion, and in rat and mouse models the plasma melatonin level peaks at midnight. 19, 20 Melatonin production decreases with aging, which can be considered a critical factor in the onset of AD. When impairment or disruption occurs in the suprachiasmatic nucleus (SCN), melatonin levels are reduced, resulting in circadian rhythm disruption. 21, 22, 23 Reduced melatonin levels in CSF have also been linked to AD, and the resulting loss of protection against oxidative damage in the AD brain may promote disease progression. Patients with AD have lower melatonin levels than healthy individuals. Melatonin can be a promising therapeutic approach to inhibit AD progression, as it has free radical‐scavenging as well as anti‐amyloidogenic properties. Melatonin also inhibits the secretion of soluble amyloid precursor protein (APP) in various cell lines by affecting APP maturation. Melatonin administration attenuates amyloid beta generation and deposition in in vitro and in vivo models. 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34 The sundowning phenomenon exacerbates mental decline, confusion, and agitation in patients with AD, whereas melatonin reduces the symptoms of sundowning and enhances cognition. In this review, we discuss the association of circadian dysfunction with AD pathology, as well as a few pharmacological and non‐pharmacological interventions for sleep disruption in patients with AD.
35, 36, 37, 38, 39 ## CIRCADIAN BIOLOGICAL CLOCK MECHANISM IN THE BRAIN A core gene of the circadian clock, the Period (PER) gene, was the first clock gene to be discovered, by Jeffrey C. Hall and Michael Rosbash. The PER protein is produced mainly at night and broken down during the day, and this cycle is regulated by a negative feedback loop in which the PER protein blocks its own production. 40, 41 This protein is encoded by the PER gene. Later, another gene, known as the double‐time (DBT) gene, was discovered to encode the DBT protein. The DBT protein delays PER accumulation, helping the rhythm stay aligned with the 24‐hour biological clock. Circadian rhythm regulation is observed at both central and peripheral levels. In 2017, Jeffrey C. Hall, Michael Rosbash, and Michael W. Young received the Nobel Prize in Physiology or Medicine for uncovering the molecular mechanisms regulating circadian rhythm. This mechanism demonstrates that mammals have a central pacemaker, the SCN, in the hypothalamus. When the retina receives photic input, it transmits this information to the SCN. This central clock regulates the circadian rhythm of body functions through the peripheral autonomic nervous system and hormonal factors. The circadian system is a web of interlinked feedback loops and oscillators across all organisms. The Period (PER1–3), Cryptochrome (CRY1 and 2), and Rev‐erb (NR1D1 and NR1D2) genes are negative feedback regulators that suppress the positive limb. The SCN helps synchronize cellular oscillators across organs in humans. The retina sends light and dark signals to the SCN, which in turn synchronizes the core clock oscillations in its neurons; these are ultimately translated into oscillatory synaptic output that transfers the signals to multiple nuclei in the hypothalamus. These patterns of neuronal activity are lost after ablation of the SCN, producing behavioral and physiological arrhythmicity.
40, 41, 42, 43, 44, 45 The circadian clock system is shown in Figure 1, and the relationship between circadian rhythm and AD is shown in Figures 2 and 3. **FIGURE 1:** *Twenty‐four hour biological clock in the human brain and its circadian disruption* **FIGURE 2:** *Crosstalk between sleep deprivation and Alzheimer's disease. Aβ, amyloid beta* **FIGURE 3:** *Linkage between circadian rhythm and Alzheimer's disease. Aβ, amyloid beta; EEG, electroencephalogram; nREM, non‐rapid eye movement; SCN, suprachiasmatic nucleus* ## CHOLINERGIC DISTURBANCES AND CIRCADIAN DYSFUNCTION IN AD PATHOLOGY Neurodegeneration can also be seen in the basal cholinergic forebrain. Disruption of circadian rhythm can also arise from the cells of the nucleus basalis magnocellularis, which project to the SCN. Enrhardth reported increased phase delays in response to light in rats, implicating the cholinergic basal forebrain projection to the SCN. This study suggests a relationship between AD neurodegeneration and the circadian clock's capacity for signal entrainment. 46, 47, 48 ## NEURONAL LOSS IN THE SCN AND CIRCADIAN DYSFUNCTION IN AD Autopsies of patients with AD show neuronal loss in the SCN, which is related to the loss of amplitude in the circadian rest‐activity pattern. Expression of melatonin receptors, apart from MT1, was also disturbed, impairing the SCN's ability to respond to phase‐resetting signals and to generate daily rhythms. 49, 50 ## RETINAL GANGLION CELL LOSS AND CIRCADIAN DYSFUNCTION IN AD A particular subset of retinal ganglion cells (RGCs), known as melanopsin‐expressing RGCs (mRGCs), was discovered in 2002. These cells are photoreceptors inside the retina that support photoentrainment of circadian rhythms by relaying light information to the SCN. Melanopsin‐expressing mRGCs constitute only $1\%$–$2\%$ of all RGCs, but they direct signals to the SCN through the retinohypothalamic tract.
In patients with AD, mRGC loss can be seen; amyloid beta deposition in and around mRGCs may impair these cells and, eventually, the wider RGC population. The Toronto study shows interesting results involving retinal amyloid beta deposition in patients with AD, and these findings will help to better understand the pathology of retinal amyloid beta deposition. Amyloid beta deposition in mRGCs can destabilize the transmission of the circadian light signal from the retina to the SCN. 51, 52, 53, 54, 55 ## CIRCADIAN GENE DELETION AND CIRCADIAN DYSFUNCTION IN AD Deletion mutations in circadian clock genes cause neuronal injury. Core circadian clock disruption is directly linked to neurodegeneration in AD. BMAL1 is considered one of the core genes of the master clock, and a study conducted in mice examined deletion of BMAL1 in the hippocampus and cortex. Brain‐specific BMAL1 knockout mice show normal behavioral rhythms and normal sleep–wake cycles, assessed by wheel‐running actigraphy and electroencephalography, respectively, yet develop severe cortical astrogliosis, synaptic degeneration, and oxidative damage in specific brain regions. In these mice, impaired circadian transcription is closely related to the loss of multiple redox defenses. Low levels of BMAL1 in the brain also worsen neurodegeneration caused by the mitochondrial toxin 3‐nitropropionic acid. The data suggest that decreased BMAL1‐mediated transcription exacerbates neurodegeneration in AD. Clock‐gene regulation and the linkage between clock genes and neurodegeneration require further research and deeper understanding. 56, 57, 58, 59 The effects of different clock genes in animal models are shown in Table 1. **TABLE 1** | No.
| Model | Effect of clock genes in different circadian models | References | | --- | --- | --- | --- | | 1 | APP‐PS1 mouse model | Inhibiting casein kinase 1 isoforms ε and δ with the inhibitor PF‐670462 reduces amyloid load and plaque size, as well as the Aβ signal in the prefrontal cortex and hippocampus, suggesting chronotherapy as a promising tool to improve behavior in mice | 103 | | 2 | Two‐month‐old female APPSwe/PS1dE9 mice | Female APPSwe/PS1dE9 mice show abnormal locomotor activity; expression of the clock genes Per1, Per2, Cry1, and Cry2 increased at night compared with daytime in wild‐type control mice, whereas Cry1 and Cry2 expression was low in APPSwe/PS1dE9 mice. This study supports APPSwe/PS1dE9 mice as a promising AD model for testing therapeutic agents related to behavioral and circadian rhythm changes. | 104 | | 3 | Cultured fibroblasts and brain samples | BMAL1 is a positive regulator of the circadian clock, and in cultured fibroblasts DNA methylation regulates BMAL1 rhythms, which is linked to circadian alteration in AD | 105 | | 4 | Tg4510 mice | Tg4510 mice show tauopathy in the SCN and disruption of PER2 and BMAL1 in the hypothalamus, indicating that tauopathy can disrupt normal circadian clock function. | 106 | | 5 | AD brain | Glial fibrillary acidic protein in human astrocytes is suppressed by elevated CLOCK and BMAL1, which cause functional impairment by inhibiting aerobic glycolysis in AD | 107 | | 6 | 5XFAD mouse model | Rev‐erbα, a circadian repressor, decreases amyloid plaque number and size in the 5XFAD AD mouse model. Rev‐erbα also modulates neuroinflammation, supporting Rev‐erbα as a novel therapeutic target.
| 108 | | 7 | APP/PS1dE9 mice | In APP/PS1dE9 mice, the rhythmic expression patterns of BACE1 and ApoE in the hippocampus, driven by E4BP4 and BMAL1, respectively, are altered. The study suggests that the hippocampal clock and the circadian oscillation of AD risk genes are regulated by orexin signaling. | 109 | ## MICROGLIA, ASTROCYTE, AND CIRCADIAN DYSFUNCTION IN AD Activation of microglia and astrocytes leads to neuroinflammation, which ultimately causes neurodegeneration. Astrocyte activation is observed after clock gene deletion in in vitro models, and the inflammatory response of microglia varies with the functional circadian clock. Rev‐Erbα regulates pro‐inflammatory cytokine production in macrophages. Inflammation in turn acts on the circadian clock, as lipopolysaccharide exposure suppresses both Rev‐Erbα and BMAL1 levels in macrophages. Therefore, BMAL1 expression in surrounding glia and neurons can be suppressed by cortical inflammation, impairing BMAL1‐associated genes and ultimately leading to neurodegeneration. 56, 60 ## OXIDATIVE STRESS AND CIRCADIAN DYSFUNCTION IN AD PATHOLOGY Numerous studies support the presence of augmented oxidative stress in AD. Low concentrations of glutathione and catalase, high oxygen consumption ($20\%$–$30\%$ of the body's total), and a high content of polyunsaturated fatty acids make the brain a highly vulnerable target for lipid peroxidation. 61, 62, 63 Lipid peroxidation interrupts cellular functions, followed by neuronal membrane destruction and the production of highly reactive electrophilic aldehydes, including acrolein, malondialdehyde, and 4‐hydroxy‐2‐nonenal (elevated in AD brains). 64, 65, 66 Oxidative stress also damages nucleic acids and proteins. The exact etiologic role of oxidative stress in AD pathogenesis is still unknown.
In 1985, day–night variation in the activity of antioxidants such as superoxide dismutase and glutathione peroxidase, along with oxidative damage, was described in the rat cerebral cortex; in humans, the circadian rhythmicity of antioxidants likewise protects cells from oxidative damage. 67, 68, 69, 70 The levels of glutathione reductase, glutathione peroxidase, superoxide dismutase, catalase, uric acid, and peroxiredoxin are high in the morning. In contrast, plasma levels of ascorbate and melatonin are high in the evening or at night. This suggests that oxidative stress leads to oxidative damage with the progression of AD, a process ultimately shaped by circadian dysregulation. 71 ## ERK/MAPK AND CIRCADIAN DYSFUNCTION IN AD Cognitive impairment is the first symptom observed in AD. Memory is enhanced by short‐term stress and impaired by long‐term stress, and the number of dendritic synapses decreases due to high cortisol levels during chronic stress. 72 The relevant pathway primarily revolves around memory consolidation, and the levels of phospho‐ERK, cAMP, and phospho‐CREB and the activities of PKA and MEK are associated with a circadian rhythm. Moreover, the SCN regulates the hippocampal cAMP/PKA/ERK/CREB signaling pathway. 73, 74, 75 Activity in the CREB/ERK/PKA/cAMP signaling pathway increases during rapid eye movement sleep, and ablating the BMAL1 gene results in reduced Per1 and pERK levels. A study reported that ERK appears overactivated in an AD mouse model and that memory is improved by pharmacological inhibition of ERK, whereas memory impairment is seen with reduction of the pCREB level downstream of the ERK pathway. 76 The ERK signaling pathway is disrupted in AD by amyloid beta (1–42)‐induced injury. Finally, the ERK/MAPK signaling pathway is a common pathway in the stress response, and circadian rhythm also plays a role in memory consolidation.
77 ## HPA AXIS AND CIRCADIAN DYSFUNCTION IN AD HPA axis activation promotes AD pathogenesis. Reducing cortisol levels with dexamethasone has not shown positive results in patients with AD; rather than targeting cortisol levels directly, approaches that decrease and modulate HPA axis activity may be a promising avenue for treating AD. Amyloid beta itself promotes HPA axis activity and increases corticosterone. The HPA axis is one of the common pathways by which sleep and circadian rhythm disruption (SCRD) and stress increase amyloid beta production, leading to AD. 78 ## HIPPOCAMPAL VOLUME AND CIRCADIAN DYSFUNCTION IN AD Reduced hippocampal volume is observed in AD and in other neurodegenerative and psychiatric disorders. It is hypothesized that prolonged sleep restriction or sleep disruption can decrease hippocampal neuronal cell proliferation and neuronal cell survival. A few preliminary clinical trials and observational studies suggest that regular physical exercise, cognitive stimulation, and management of general medical conditions can slow or reverse hippocampal atrophy, or even expand hippocampal size. 79, 80 ## GLYMPHATIC SYSTEM AND CIRCADIAN DYSFUNCTION IN AD The glymphatic system, first described in 2012, consists of interstitial fluid flow through the perivascular spaces surrounding blood vessels and regulates brain amyloid clearance. Glymphatic dysfunction also plays a vital role in the severity of AD. To date, no clinically approved method has been developed to evaluate the functionality of the glymphatic system in humans. Recently, the glymphatic system has also been implicated in the pathogenesis of glaucoma, which is characterized by progressive degeneration of RGCs and amyloid beta accumulation. Glymphatic activity is higher during sleep and lower during wakefulness, and body posture during sleep, especially the lateral position, may increase glymphatic transport in rats. Further studies are needed to examine the relationship between the glymphatic system and patients with AD.
11, 81, 82 ## PROTEOSTASIS AND CIRCADIAN DYSFUNCTION IN AD Amyloid beta and tau are the specific protein hallmarks seen in AD. Deletion of heat shock factor 1 alters circadian clock oscillation. Proteasomal degradation of proteins displays circadian oscillations, and normal circadian clock timing requires intact proteasome function. It is still unknown how the circadian clock controls rhythmic protein degradation in the brain. 83 ## VASCULAR AND CIRCADIAN DYSFUNCTION IN AD Microvascular change is considered an essential factor in the development of AD, both pathologically and clinically, and cerebral vascular perfusion is also under the control of the circadian system. According to PET and single‐photon emission computed tomography scans, people with mild cognitive impairment and an increased risk of developing AD exhibit hypometabolism and cerebral hypoperfusion. Antihypertensive treatment has also been shown to reduce the risk of AD. 84, 85, 86 Conroy et al investigated the daily regularity of cerebral blood flow velocity (CBFV) across 30 hours of continuous wakefulness. The findings of this study suggested that human CBFV probably follows an endogenous circadian rhythm, which should be investigated further in the context of cerebrovascular/cardiovascular events and deterioration of cognitive function. 87, 88, 89 Laser‐Doppler flowmetry revealed similar results in rats: cerebral blood flow has a diurnal periodicity independent of locomotor activity and blood pressure changes. The effect of the circadian rhythm on brain metabolism and perfusion should be carefully considered in future studies of the role of vascular function in AD etiopathogenesis.
90, 91, 92 ## METABOLIC CHANGES AND CIRCADIAN DYSFUNCTION IN AD The effects of circadian/sleep disruption in neurodegenerative disorders, particularly AD, may be mediated by metabolic changes. Insulin resistance has been linked to an increased risk of AD in clinical studies, and, apart from diabetes, childhood obesity can also cause cognitive impairment later in life. Apolipoprotein E (APOE) is a key regulator of lipid metabolism found primarily in brain astrocytes. The APOE 4 allele, a major risk factor for AD, can cause mitochondrial dysfunction, leading to insulin resistance and metabolic defects. 93, 94, 95, 96, 97, 98 A recent study suggests that peripheral metabolic dysfunction plays a role in the development of AD‐related neuropathology. The circadian clock regulates the majority of metabolic activity, and the loss of circadian clocks has been linked to cellular and system‐wide metabolic deficits. Sleep deprivation significantly impacts metabolism, including an increase in insulin resistance markers. Based on these findings, it is tempting to speculate that sleep disruption increases the risk of AD by disrupting metabolism. 99, 100, 101, 102 ## MELATONIN AS A PROMISING THERAPEUTIC TARGET FOR AD In AD, melatonin has shown multiple beneficial effects, including prevention of mitochondrial dysfunction, inhibition of amyloid beta toxicity, free radical scavenging, and improvement of circadian dysregulation such as sundowning and sleep disturbances. 110 Melatonin also crosses the blood–brain barrier and combines antioxidant properties with balanced amphiphilicity. Amyloid beta peptides are produced mainly through amyloidogenic processing of the beta‐amyloid precursor protein (beta APP). Amyloid beta 42 is the most neurotoxic form of amyloid beta. This beta‐pleated‐sheet peptide ultimately aggregates into amyloid fibrils and senile plaques in the brain, disrupting synaptic communication and leading to abnormal neuronal function and neuronal death. 
As melatonin has anti‐oxidant, neuroprotective, and anti‐amyloidogenic properties, it might help in decreasing amyloid beta formation. Melatonin has shown effects in both in vivo and in vitro models. 111, 112, 113, 114, 115 Hyperphosphorylated tau plays a crucial role in the memory and cognitive impairment of AD, and tau hyperphosphorylation drives neurodegeneration. Melatonin attenuates tau phosphorylation and protein kinase A (PKA) overactivation in the isoproterenol‐induced rat brain, and similar effects have been observed in neuroblastoma SH‐SY5Y and N2a cell lines in which hyperphosphorylation was induced by calyculin A, okadaic acid, or wortmannin. Melatonin also shows neuroprotective effects against hippocampal degeneration and enhances cognition; these effects are exerted through regulation of GSK3 and CDK5 activities in hippocampal neurons. Melatonin inhibits the expression of caspase 3, prostate apoptosis response 4 (Par‐4), and the Bcl‐2‐associated protein BAX, thereby reducing neuronal death. 116, 117, 118, 119, 120, 121 Melatonin's anti‐oxidant properties reduce oxidative stress. In an experimental study, melatonin inhibited NF‐κB‐induced IL‐6 in amyloid beta‐treated brain slices in a concentration‐dependent fashion. Melatonin injection in rats (at doses of 5 mg/kg, 0.1 to 10 mg/kg, and 10 mg/kg across studies) showed anti‐inflammatory effects and reduced neuroinflammation by increasing ATP production, stimulating glutathione peroxidase (GPx) activity, and enhancing superoxide dismutase (SOD) activity. 122 This evidence therefore supports the anti‐neuroinflammatory effects of melatonin in AD. ## RELATION AMONG EXERCISE, CIRCADIAN RHYTHM, AND AD Various animal models show that exercise has chronobiotic properties. It is difficult to establish whether exercise has chronobiotic properties in humans, because it is hard to separate the effects of exercise from those of multiple other factors, such as food, social influences, and light. 
123 Non‐photic stimuli, on the other hand, appear capable of synchronizing circadian rhythms in blind people who lack light sensitivity, helping them entrain to routine schedules without using exogenous melatonin. A recent study on circadian rhythms and AD showed that exercise performed just before habitual sleep time advances the circadian rhythm, whereas exercise performed during habitual sleep time delays it. 124, 125, 126 Exercise also affects the hippocampus, which in turn influences sleep quality. It has also been reported that people who exercise regularly have better sleep quality and less daytime sleepiness than people who are inactive. As a result, exercise may have a greater impact on older adults who have difficulty sleeping. Exercise also enhances cognition and promotes neural plasticity, which is beneficial in normal aging as well as in the treatment of AD. 127, 128, 129, 130, 131, 132 Sleep after exercise has a well‐known effect on cognitive performance. According to recent findings, physical activity plays a substantial role in diminishing the effects of poor sleep quality on cognitive functioning in older adult women. As a result, more research is needed to understand the mechanisms linking exercise, sleep, and cognitive function in older adults. 133, 134, 135, 136, 137, 138 ## CURRENT THERAPIES AND FUTURE IMPLICATIONS Unfortunately, at present, we have limited pharmacological and non‐pharmacological interventions to manage sleep disturbance in patients with AD. Current behavioral practices in AD include limiting caffeine and alcohol intake, exercising regularly, and maintaining regular bed and wake times with ample light exposure upon waking. 60 Sufficient daytime light exposure is crucial for patients with AD, particularly for institutionalized patients. 
Consistent light exposure may improve dysfunctional circadian rhythms in AD and reduce “sundowning.” Patients with moderate‐to‐severe AD were included in the melatonin and trazodone trials, but only patients with mild‐to‐moderate AD were included in the ramelteon study. Melatonin is considered in various clinical contexts and treatment strategies for AD. 139, 140, 141 Actigraphy is used to measure all primary sleep outcomes in these trials. Despite the absence of severe side effects, we still have no evidence that melatonin or trazodone improves sleep quality. More comprehensive clinical trials are urgently needed in this area, particularly trials focusing on sleep and cognitive or pathological outcomes in AD. Suvorexant, the first US Food and Drug Administration (FDA)‐approved orexin receptor antagonist, may affect amyloid deposition and cognitive end points in early‐stage or presymptomatic AD. Regular melatonin supplementation may slightly improve cognitive performance in patients with mild cognitive impairment. However, the evidence in mice regarding the effectiveness of melatonin supplementation in reducing amyloid plaques and other AD correlates appears conflicting. Ramelteon has been approved for insomnia, whereas tasimelteon is approved for the treatment of non‐24 hour sleep–wake disorder in the blind. These two drugs have not yet been tested for AD but might prove more effective than melatonin. Researchers are also trying to develop drugs that directly target the circadian clock, although these efforts are still at an early stage. Small molecules that can alter the amplitude, frequency, and period of circadian oscillations have been discovered through high‐throughput screening. A small‐molecule agonist of the nuclear receptor REV‐ERB can improve metabolic function in mice by directly affecting circadian rhythms. 
Finally, appropriate targeting of the circadian clock could be a promising therapeutic option for treating AD. 33, 34 ## CONCLUSION The pathology of AD (amyloid and tau) has been linked to circadian dysfunction, and sleep disruptions are very common in patients with Alzheimer's disease and play an important role in disease progression and pathology. Moreover, circadian rhythms interact with nearly all systems and risk factors involved in the development and progression of AD. Recognizing early signs of AD, such as changes in sleep patterns and rest‐activity rhythm anomalies, could help identify early biomarkers for intervention to prevent the formation of amyloid beta and neurofibrillary tangles and the progression of neurodegeneration. In patients with advanced AD, bright light therapy combined with chronobiotics is effective in treating sundowning features and other cognitive symptoms. Future research into the role of circadian misalignment in the initial stages of AD could lead to new preventive and therapeutic approaches. Circadian rhythms are therefore an excellent target for combating AD pathology. ## AUTHOR CONTRIBUTIONS Manuscript writing and drawing figures: Faizan Ahmad. Manuscript writing, reviewing, and editing: Punya Sachdeva. Editing: Jasmine Sarkar. Reviewing: Rafiah Izhaar. ## FUNDING INFORMATION No funding was received for this study. ## CONFLICT OF INTEREST The authors declare they have no conflict of interest.
# Factors Associated with Lack of Health Screening among People with Disabilities Using Andersen’s Behavioral Model ## Abstract People with disabilities often have poorer health than the general population, and many do not participate in preventive care. This study aimed to identify the health screening participation rates of such individuals and to investigate why they did not receive preventive medical services, based on Andersen’s behavioral model, using data from the Survey on People with Disabilities. The health screening participation rate for people with disabilities was $69.1\%$. Many did not participate in health screening because they had no symptoms and considered themselves healthy; poor transportation services and economic limitations were additional barriers. The binary logistic regression results indicate that younger age, lower education level, and being unmarried (predisposing characteristics); economic inactivity (enabling resources); and absence of chronic diseases, severe disability grade, and suicidal ideation (need factors) were the strongest determinants of non-participation in health screening. This indicates that health screening of people with disabilities should be promoted while taking into account the large individual differences in socioeconomic status and disability characteristics. It is particularly necessary to prioritize ways to address need factors, such as chronic disease and mental health management, rather than focusing on uncontrollable predisposing characteristics and enabling resources among the barriers to participation in health screening for people with disabilities. ## 1. Introduction Health screening aims to detect and treat diseases at an early stage, thereby reducing the burden of medical expenses and ensuring a healthy life [1]. In Korea, health screening services are divided into national and private health screenings, which differ in terms of screening items and cost burdens. 
National health screening mainly provides basic and essential health screening items, with little financial burden on individuals. In a private health screening, although various health screening items can be selected according to the individual’s characteristics and preferences, the economic burden is high because it is fully borne by the individual [2]. Korea’s national health screening aims at the early detection of obesity, dyslipidemia, high blood pressure, and diabetes, which are risk factors for cardiovascular and cerebrovascular diseases, and at improving quality of life through treatment or lifestyle improvement. The Korean national health screening is aimed at checking health conditions and preventing and detecting diseases at an early stage. Health screening consists of examination and consultation, physical examination, diagnostic examination, pathology examination, radiological examination, etc., carried out through health screening institutions [3,4]. The most representative health screening in Korea is that of the National Health Insurance Service. National health screenings have expanded in both subjects and examination items since medical insurance health screening for public servants and teachers began in 1980. The national health screening participation rate in Korea in 2019 was $74.1\%$ [5]. However, the health screening participation rate of people with disabilities was $64.6\%$ [6]. Since the introduction of the national health screening, the increasing participation rate and the accompanying health promotion strategies demonstrate its success. However, the health screening participation rate of people with disabilities is not only low, but this group also suffers from many chronic diseases [6]. It is therefore important to determine the causes of this reduced rate and to take countermeasures. 
Although the health screening rate for people with disabilities is reported regularly, it is clear that there are deficiencies in implementing national policies and health promotion services for this group. There are still no general or specialized health screening systems for people with disabilities to detect or prevent secondary diseases at an early stage. Article 7 of the Guarantee of the Right to Health and Medical Accessibility of Persons with Disabilities (Act on the Right to Health of Persons with Disabilities), enacted in December 2015, stipulates a “health screening project for persons with disabilities”; efforts have been made at the national level to ensure customized health screening for people with disabilities [7]. Health screening items suited to characteristics such as sex and life cycle stage should be designed, which requires identifying the factors that influence the health screening of people with disabilities. Previous studies related to health screening for people with disabilities have been reported by Park et al. [8], Yoon [9], Kim et al. [10], and the National Rehabilitation Center [11]. According to studies on the health screening rate of people with disabilities, screening was lower among women with disabilities, those of older age, those receiving medical aid, and those with lower income, and participation in health screening differs depending on the type and grade of disability. In particular, it is reported that the screening rate decreases as the degree of disability increases from mild to severe and when mobility impairment is greater. A study in the United States also reported that the higher the degree of disability, the lower the screening rate for diseases such as cervical cancer [12]. In addition, the screening rate of people with disabilities is lower than that of the general population [13]. 
People with disabilities have the same rights to healthcare as the general population. To improve the health screening participation rate, which is also emphasized in The 5th Policy Plan for people with disabilities in South Korea [14], it is necessary to identify the related factors. For the purposes of this study, health screening is treated as a form of medical service utilization, and Andersen’s behavioral model of health service utilization is applied. We examined actual health screening participation behavior and sought to identify the factors underlying it. Therefore, in this study, we aimed to identify the health screening status of people with disabilities and the factors affecting health screening by using the disability status survey, which provides sample statistical data for people with disabilities. The findings can help identify factors that affect the health screening of people with disabilities, as well as factors needed to improve the health screening rate. In addition, by identifying and addressing the factors influencing health screening by predisposing characteristics, enabling resources, and need factors, it is possible to grasp the current status of health screening for people with disabilities and re-examine it, providing evidence for follow-up tasks and research in the field of health for people with disabilities. This study aimed to examine the health screening rates of people with disabilities and the characteristics of those who did not undergo health screenings, and to identify factors that affect health screening for people with disabilities. The specific research objectives were as follows: first, the sociodemographic characteristics of people with disabilities were identified. Second, the general health screening rate of people with disabilities and the reasons for not undergoing screening were identified. 
Third, the predisposing characteristics, enabling resources, and need factors of people with disabilities who did and did not undergo general health screenings were identified. Fourth, factors affecting the general health screening of people with disabilities were analyzed. ## 2. Materials and Methods This analytical study used the 2020 Survey on People with Disabilities (as secondary data) to identify factors that affect health screening for people with disabilities based on Andersen’s behavioral model (Figure 1) [15]. Andersen’s behavioral model is a conceptual model aimed at demonstrating the factors that lead to the use of health services. According to the model, usage of health services (including inpatient care, etc.) is determined by three sets of factors: predisposing characteristics, enabling resources, and need factors. Predisposing characteristics include factors such as sex, age, and health beliefs. Need factors represent both the perceived and the actual need for health care services. The original model has been expanded through numerous iterations; its most recent form extends beyond the use of services to health outcomes and includes health screening [16]. ## 2.1. Participants and Analysis Data This study used data from the 2020 Survey on People with Disabilities conducted by the Ministry of Health and Welfare and the Korea Institute for Health and Social Affairs [17]. The survey has been conducted every three years since 2007, as mandated by Korean social welfare legislation. The 2020 survey comprises data obtained by surveying 11,210 registered persons with disabilities across 248 survey areas in Korea. It is representative data collected using two-stage cluster sampling that considered the type and degree of disability and the age of the target group. 
A total of 7025 people participated in this survey, of which 365 people under the age of 19 were excluded, and 6660 people were finally analyzed. ## 2.2.1. Dependent Variable Among the 2020 survey items for people with disabilities, the question “Have you had a health screening in the past two years (2018–2020)?” was used as the dependent variable [17]. The survey covered comprehensive health examinations paid for by the individual, special health examinations at industrial sites (for workers exposed to hazardous substances), health examinations from the National Health Insurance Service (for workplace or regional subscribers and medical benefit recipients), and free health examinations (including health screening by local governments other than the National Health Insurance Corporation). ## 2.2.2. Independent Variable The predisposing factors included sociodemographic variables such as sex and age, and social structural variables such as occupation and education, which the individual possesses independently of his or her will. Education level was divided into elementary school, middle school, high school, and university graduation. Marital status was divided into married (having a spouse) and other categories (single, widowed, divorced, separated, single mother/unmarried father, etc.). Enabling factors, such as income and medical security benefits, allow individuals to satisfy their need for medical services. The enabling resources in this study were subjective household economic status, national health insurance, and economic activity. Economic activity was identified through the question “Did you work for income?” Need factors reflect the pursuit of medical services because of disease conditions; in this study, the need variables were disability type and grade, chronic disease, stress levels in daily life, feelings of sadness or despair, suicidal ideation, and suicide attempt. 
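As an illustration only, the dichotomization of the dependent variable and the recoding of predictors described in this section can be sketched as follows. The field names and response codes are hypothetical assumptions, not the survey's official codebook.

```python
# Hypothetical sketch of the variable recoding described above.
# Field names and response codes are assumptions, not the official
# codebook of the 2020 Survey on People with Disabilities.

def recode_screening(answer: str) -> int:
    """Dependent variable: 1 = did NOT participate in health screening
    in the past two years (the modelled outcome), 0 = participated."""
    return 0 if answer == "yes" else 1

def recode_marital(status: str) -> str:
    """Married (having a spouse) vs. all other categories
    (unmarried, widowed, divorced, separated, etc.)."""
    return "married" if status == "married" else "other"

def recode_grade(grade: int) -> str:
    """Disability grades 1-3 = severe, grades 4-6 = mild."""
    if not 1 <= grade <= 6:
        raise ValueError(f"disability grade must be 1-6, got {grade}")
    return "severe" if grade <= 3 else "mild"
```

Under this sketch, a record such as `{"screening": "no", "marital": "widowed", "grade": 2}` would be coded as an unmarried, severe-disability non-participant.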
Concerning disability types, 15 categories were investigated in the survey: physical function disability, disability with a brain lesion, visual impairment, hearing impairment, speech impairment, intellectual disability, autistic disorder, mental disorder, kidney dysfunction, cardiac dysfunction, respiratory dysfunction, liver dysfunction, facial dysfunction, intestinal or urinary fistula, and epilepsy. These 15 disability types were consolidated into five categories based on their proportions: physical function disability, disability with a brain lesion, visual impairment, hearing impairment, and others. The ratings for each type of disability ranged from 1 to 6. Grade 1 refers to the most severe disability, while Grade 6 refers to the least severe disability. Grades 1 to 3 represent people with severe disabilities, and grades 4 to 6 represent people with mild disabilities. ## 2.3. Data Analysis We used SPSS for Windows 26.0 for data analysis, and the significance level was set at 0.05. The general and disability-related characteristics of people with disabilities were analyzed by frequency, percentage, mean, and standard deviation. The relationship between the participants’ predisposing characteristics, enabling resources, and need factors and their health screening participation was examined using a chi-square test. To identify the factors that affect the health screening of people with disabilities, a multiple logistic regression analysis was performed, with predisposing characteristics, enabling resources, and need factors as independent variables. ## 3.1. General Characteristics Regarding the general characteristics of the participants, $59.1\%$ were male and $40.9\%$ were female, a male-to-female ratio of about 6:4. Regarding age groups, $8.7\%$ were aged 20–39 years, $28.8\%$ were aged 40–59 years, $48.3\%$ were aged 60–79 years, and $14.2\%$ were aged 80 years or older. 
Regarding education level, $38.9\%$ graduated from elementary school or less, $19.6\%$ graduated from middle school, $36.2\%$ graduated from high school, and $5.3\%$ graduated from college or higher (including junior college). Regarding marital status, $50.7\%$ were married and $49.3\%$ were in the “other” category. Regarding national health insurance, $71\%$ were enrolled in health insurance, $27.1\%$ in medical aid, and $1.8\%$ in others. Regarding subjective household economic status, $70.2\%$ of the participants belonged to the lower level, $28.9\%$ to the middle level, and $0.9\%$ to the upper level, showing that people with disabilities generally experience economic difficulties. Of the participants, $24.7\%$ said they were engaged in economic activities, and $75.3\%$ were not. Chronic diseases were present in $75.6\%$ of the participants and absent in $24.4\%$. The disability types were physical function disability ($26.6\%$), brain lesions ($11.9\%$), visual impairment ($11.7\%$), hearing impairment ($14.6\%$), developmental disability ($7.6\%$), and others (e.g., speech impairment and mental disorder; $27.6\%$). Disability grades were severe (grades 1–3; $49.4\%$) and mild (grades 4–6; $50.6\%$). The degree of stress in daily life was slight ($14\%$), moderate ($50.5\%$), and high ($35.5\%$). Of the participants, $19.8\%$, $12.3\%$, and $0.7\%$ had experienced sadness or hopelessness, suicidal ideation, and suicide attempts, respectively; $80.2\%$, $87.7\%$, and $99.3\%$ had not (Table 1). ## 3.2. Health Screening Participation Rates and Reasons for Non-Participation in Health Screening It was found that $69.1\%$ of people with disabilities underwent health screening. 
The main reasons for not undergoing health screening were “lack of symptoms and being considered healthy” ($32.9\%$), “inconvenient transportation” ($20.4\%$), “other reasons” ($12.4\%$), “economic reasons” ($8.2\%$), and “lack of time” ($6.2\%$). Additional responses included “anxiety regarding health screening results”, “difficulty in communication”, “insufficient knowledge regarding health screening”, “insufficient facilities for people with disabilities in medical institutions”, and “not having someone to accompany them when visiting a health screening institution”. There were also reasons such as “there is no reason” and “it is difficult to make a reservation at a screening institution” (Table 2). ## 3.3. Comparison of Factors According to Health Screening Status There were significant differences in health screening rates related to age, education level, marital status, subjective household economic status, chronic diseases, health insurance, economic activity, disability type and grade, depressive symptoms, suicidal ideation, and suicide attempts. Regarding age groups, participants aged 60–79 years ($52.8\%$) and 40–59 years ($28.9\%$) showed higher health screening rates than those aged 80 years or older ($12.9\%$) and 20–39 years ($5.4\%$). Elementary school graduates ($37.7\%$) showed higher health screening rates than middle school ($20.9\%$), high school ($36.4\%$), or college ($5.1\%$) graduates. Health screening rates were higher for those with spouses ($56.6\%$) than for those without a spouse ($43.4\%$), and the health screening rate was high in the group with low subjective household economic status. Regarding national health insurance, the health insurance group ($75.3\%$) had a higher health screening rate than the medical aid group ($22.8\%$), and the non-economically active group ($70.5\%$) had a higher screening rate than the economically active group ($29.5\%$). 
The health screening rate of those with chronic diseases ($77\%$) was higher than that of the group without chronic diseases ($23\%$); classified by disability type, the rates were physical disability ($29.1\%$), brain lesion disorder ($10.5\%$), visual impairment ($12.9\%$), and hearing impairment ($15.6\%$). The screening rate for people with mild disabilities ($56.7\%$) was higher than that for people with severe disabilities ($43.3\%$). The health screening participation rate was high for people with disabilities with relatively good mental health, such as no depression ($82.3\%$), no suicidal ideation ($89.7\%$), and no suicide attempts ($99.4\%$) (Table 3). ## 3.4. Analysis of Influencing Factors Related to Non-Participation in Health Screening The results of the multiple logistic regression analysis of non-participation of people with disabilities in health screening showed that age, education, marital status, type of medical insurance, economic activity, chronic diseases, degree of disability, and suicidal ideation were statistically significant at a significance level of 0.05 (Table 4). In terms of age, compared to those aged ≥80 years, individuals in their twenties or thirties were approximately 2.1 times ($95\%$ CI = 1.4 to 2.9) more likely not to participate in health screening. In terms of education, the odds of non-participation were approximately 1.4 times higher for those with lower education than for those with a higher education degree. The probability of not undergoing a health screening was approximately 1.3 times higher for people with disabilities without a spouse than for those with a spouse. Compared to those with national health insurance, those receiving medical aid were approximately 1.2 times more likely not to participate, and the rate of non-examination was twice as high among those who were not engaged in economic activities. 
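For readers unfamiliar with how odds ratios like those reported above are obtained, the following is a minimal, self-contained sketch (with made-up counts, not the study's data) of computing an odds ratio and its Wald 95% confidence interval from a 2×2 table of non-participation counts.

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
        a = exposed, non-participants      b = exposed, participants
        c = unexposed, non-participants    d = unexposed, participants
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Made-up counts: 120 of 200 "exposed" respondents vs. 100 of 200
# "unexposed" respondents did not participate in screening.
or_, lo, hi = odds_ratio_ci(120, 80, 100, 100)
print(f"OR = {or_:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")  # OR = 1.50
```

A multiple logistic regression, as used in this study, reports the same quantity adjusted for the other covariates (the exponentiated regression coefficients), rather than the crude 2×2 estimate shown here.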
Compared to those with physical disabilities, those with brain lesions and developmental disabilities were 1.6 times more likely to miss a health screening. The rate of non-participation in health screening was 1.4 times higher both for those without chronic diseases and for those with severe disabilities. Those with suicidal ideation were 1.3 times more likely not to participate in health screening. ## 4. Discussion Research on health screening rates for people with disabilities has been conducted only sporadically. In this study, factors affecting the non-participation rate in health screening for people with disabilities were classified into predisposing characteristics, enabling resources, and need factors. The study aimed to provide basic data for establishing programs and policies that can improve the health screening rate of people with disabilities by analyzing the factors that affect their non-participation in health screening. In this study, the health screening participation rate for adults with disabilities was $69.1\%$. Similar results were reported by Kim et al., who found a $70.2\%$ health screening rate for people with disabilities [10]. In addition, the result of this study was 4.5 percentage points higher than the $64.6\%$ health screening rate of people with disabilities in the 2019 health statistics for people with disabilities published by the National Rehabilitation Center [18], which reflected only the national health screening. Because this study included private health screenings in addition to national examinations, the results were higher than those of the National Rehabilitation Center. However, in 2019, the health screening rate for people without disabilities in Korea was $74\%$ [18]. Therefore, the health screening participation rate of people with disabilities was somewhat lower than that of people without disabilities. 
A study in the United States also reported that people with disabilities had lower screening rates than those without disabilities [13,19]. Few studies have quantitatively and qualitatively identified the health screening rates of people with disabilities; therefore, comparison with existing studies is limited, which makes improving health screening for people with disabilities an urgent task. The first reason people with disabilities do not participate in health screening is that they have no symptoms and think they are healthy. However, the prevalence of chronic diseases among people with disabilities is reported to be $86.4\%$ [6]. Rather than waiting until symptoms prompt a hospital visit, it is necessary to detect and treat diseases early, while still asymptomatic, and to inform people of the need to improve their lifestyle. Inconvenient transportation was found to be a major barrier for people with disabilities, leading to non-participation in health screening. The government needs to establish a transportation system by expanding convenient mobility equipment in means of transportation, passenger facilities, and roads, and by improving the pedestrian environment, so that people with disabilities may travel safely and conveniently. In addition, a lack of information on health screenings, the absence of guardians, and communication difficulties were found to be barriers to participation in health screenings for people with disabilities. For people with disabilities who have difficulty moving, policies such as mobile health screening services and visiting health screening centers are required. In this study, consistent with previous studies [8,18,20,21], health screening participation rates differed according to age group, subjective economic status, economic activity, and degree of disability. 
Although not significant in this study, there was a sex-based difference in the health screening rates of people with disabilities [21,22]. Compared to men, women with disabilities had a lower health screening rate, suggesting that their health is more vulnerable. In addition, a health screening strategy for people with low gross household income and severe disabilities is required. The results of the logistic regression analysis of the variables that affect the health screening participation of people with disabilities showed that age group, subjective economic status, economic activity, and degree of disability had a statistically significant effect on the health screening rates. Older age, better subjective economic status, and milder disability were found to have a positive effect on the health screening participation rate. Conversely, health screening rates were low for those of younger age, poor subjective economic status, and severe disabilities. In addition, the non-participation rate in health screening was 1.2 times higher for those without a spouse (unmarried, widowed, divorced, separated, single mother/unmarried father, etc.) than for those with a spouse. This study has some limitations. First, the survey data on the actual condition of people with disabilities depended on the participants’ responses to the question, “Have you had a health screening in the past two years?” In addition, it was not possible to segment and analyze various types of examinations, such as national general examinations, life transition period examinations, and cancer screening. Therefore, future research identifying related factors across more diverse forms of examination, such as health screenings during the transition periods of life and cancer screenings, is required. Second, because the survey respondents were home-based people with disabilities, there could be limitations in representing all people with disabilities.
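Odds ratios like those reported above (for example, the 1.2-fold higher non-participation among those without a spouse) come from logistic regression; for a single binary factor, the unadjusted odds ratio reduces to the cross-product ratio of a 2×2 table. A minimal Python sketch with made-up counts (the numbers below are illustrative, not the survey data):

```python
def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Unadjusted odds ratio from a 2x2 table (cross-product ratio)."""
    return (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

# Hypothetical counts: non-participation among people without vs. with a spouse
or_spouse = odds_ratio(120, 300, 100, 300)  # illustrative numbers only
print(round(or_spouse, 2))  # 1.2
```

Adjusted odds ratios, as in the study's multivariable model, would instead be obtained by exponentiating logistic regression coefficients.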
Third, we cannot rule out that critical variables affecting health screening for people with disabilities were omitted because of the limited set of available variables. Various important variables, such as chronic disease status, region, and individual private insurance, should be included in future work. In this study, to increase health screening participation rates for people with disabilities, age should be considered as a predisposing factor, economic level as an enabling factor, and severity of disability as a need factor. Based on these results, it is possible to improve the health screening rates of people with disabilities and establish health management and promotion policies to improve the health and happiness of people with disabilities, detect diseases early, and improve and promote current health conditions. Therefore, social and institutional support measures are required, as are appropriate rehabilitation services for people with disabilities.
## 5. Conclusions
This study identified the factors affecting the health screening of 6660 people with disabilities aged 20 years or older who responded to the 2020 Survey on People with Disabilities. It is commonly known that people with disabilities have poor access to medical services compared to people without disabilities, considering their poorer health and lower economic status. Therefore, although the need for preventive medical services, such as health screening, is much higher for people with disabilities, their current provision is lower than that for people without disabilities. This inevitably leads to an increase in medical expenses [23,24]. Thus, active planning and design by the government are required. Recently, the government has invited people with disabilities to undergo health screening without inconvenience, but the response rate was low. In general, for people with disabilities to receive health screening, facilities, equipment, and time must be customized.
Accordingly, the government is building customized screening centers for people with disabilities. In addition to providing basic health screening services for people with disabilities through health screening centers, specialized health screening items should be developed and disseminated. Health promotion and disease prevention for people with disabilities can be achieved through the provision of customized health screening services for each life cycle, considering the characteristics of people with disabilities, and through more active and voluntary participation in health screening by the people concerned, so that their health can be monitored at the national level. We consider that continuous efforts are also necessary to achieve a more suitable screening system for people with disabilities.
# Blood Count-Derived Inflammatory Markers Correlate with Lengthier Hospital Stay and Are Predictors of Pneumothorax Risk in Thoracic Trauma Patients
## Abstract
[1] Background: Trauma is one of the leading causes of death worldwide, with the chest being the third most frequent body part injured, after abdominal and head trauma. Identifying and predicting injuries related to the trauma mechanism is the initial step in managing significant thoracic trauma. The purpose of this study is to assess the predictive capabilities of blood count-derived inflammatory markers at admission. [2] Materials and Methods: The current study was designed as an observational, analytical, retrospective cohort study. It included all patients over the age of 18 diagnosed with thoracic trauma, confirmed with a CT scan, and admitted to the Clinical Emergency Hospital of Targu Mureş, Romania. [3] Results: The occurrence of posttraumatic pneumothorax is highly linked to age ($p = 0.002$), tobacco use ($p = 0.01$), and obesity ($p = 0.01$). Furthermore, high values of all hematological ratios, such as the NLR, MLR, PLR, SII, SIRI, and AISI, are directly associated with the occurrence of pneumothorax ($p < 0.001$). In addition, increased values of the NLR, SII, SIRI, and AISI at admission predict a lengthier hospitalization ($p = 0.003$). [4] Conclusions: Increased neutrophil-to-lymphocyte ratio (NLR), monocyte-to-lymphocyte ratio (MLR), platelet-to-lymphocyte ratio (PLR), systemic inflammatory index (SII), aggregate inflammatory systemic index (AISI), and systemic inflammatory response index (SIRI) levels at admission highly predict the occurrence of pneumothorax, according to our data.
## 1. Introduction
Trauma is the world’s top cause of disability and death in the first four decades of life. In this age group, the number of young adults who die from trauma exceeds all deaths from cancer combined [1].
Thoracic injuries are highly significant in patients with severe trauma, occurring in up to $50\%$ of patients with polytrauma [2]. According to the current literature, the mortality rate following thoracic trauma varies between 25 and $50\%$, depending on the associated injuries [3,4]. The assessment of thoracic trauma severity determines the choice of first therapy and the subsequent clinical course when treating patients with polytrauma. Although thoracic trauma specifically has received little attention in the literature, there is a wealth of information on the mortality-associated risk factors following trauma in general [5,6]. Pathological inflammatory and anti-inflammatory responses that occur in the first hours following extensive trauma are one of the major contributing factors to mortality in post-traumatic patients and remain challenging to control and distinguish from a physiological immune reaction [7]. The balance between these two antagonistic inflammatory responses, as predictors of outcomes in trauma patients, has received a lot of attention recently. In response to severe injury, patients frequently experience a variety of anomalies in their host defense mechanisms [8]. Systemic inflammatory response syndrome (SIRS) is the result of an unbalanced inflammatory response that escalates and releases an excessive amount of inflammatory mediators, such as IL-1, IL-6, IL-8, and TNF [9]. The injury burden is increased by the progression of such an uncontrolled cytokine cascade and hyperinflammation. This can lead to detrimental and frequently fatal events such as SIRS and multiple organ dysfunction syndrome (MODS) [10]. Recently, there has been a growing interest in developing a trustworthy biomarker that can assess the prognosis of patients with thoracic trauma [11]. The neutrophil-to-lymphocyte ratio (NLR) is one of the most accessible markers. 
This ratio has been proven to significantly predict the outcomes of patients with COVID-19 infection [12,13,14,15], cardiovascular diseases [16,17,18,19,20], kidney disease [12,21], and oncological conditions [22,23,24]. Another well-studied biomarker is the platelet-to-lymphocyte ratio (PLR), which has been shown to have excellent predictive value for the prognosis of patients in the fields of orthopedics [19,25,26] and trauma care [27,28,29,30]. Based on routine blood tests at admission, several other ratios can be calculated, such as the monocyte-to-lymphocyte ratio (MLR), aggregate inflammatory systemic index (AISI), systemic inflammatory response index (SIRI), and systemic inflammatory index (SII). The MLR has been proven to be a valid predictor of the occurrence of complications in strokes [31], and of the outcomes and severity of hematological disorders [32] and oncological diseases [33]. The NLR, PLR, and MLR have been the subject of a growing number of studies in recent years, whose findings suggest that a combination of these ratios would increase their predictive value [34,35,36]. Thus, the aggregate inflammatory systemic index (AISI), systemic inflammatory response index (SIRI), and systemic inflammatory index (SII) were introduced and proven useful when evaluating the severity and prognosis of patients with various chronic and acute pathologies [37,38,39]. The prognostic ratios calculated from routine blood tests appear to be a helpful and cost-effective resource in trauma management. Although there are mentions in the literature of the correlation between the NLR and the outcomes of thoracic trauma patients [11], there are few to no papers published regarding the use of the PLR, MLR, SII, AISI, and SIRI as prognostic factors for the outcomes of patients with thoracic trauma.
The purpose of this study is to establish the prognostic value of inflammatory biomarkers and the underlying risk factors in patients with thoracic trauma.
## 2.1. Study Design
The present study was designed as an observational, retrospective, analytical cohort study in which we included all patients over the age of 18 who presented, were diagnosed with thoracic trauma, and were admitted to the County Emergency Clinical Hospital of Targu Mureş, Romania, between January 2015 and December 2022. All patients included in our study underwent a radiological examination, either a conventional X-ray or a CT scan, and all were diagnosed with thoracic trauma as the main diagnosis. We excluded patients who passed away within the first 24 h, suffered severe bone fractures requiring specialized orthopedic care, had a history of hematological or oncological disorders, had presented thromboembolic events in the previous two months, or had pneumonia. We also excluded patients suffering from mediastinal hematoma and aortic dissection, as such patients are referred to the cardiovascular surgery department, not the thoracic surgery department. All patients included in our study suffered from peacetime injuries. We initially split the patients into two categories, “Pneumothorax” and “No Pneumothorax”, based on the findings at admission.
## 2.2. Data Collection
We collected the following data from our patients: age, sex, medical history (diabetes mellitus—DM, arterial hypertension—AH, atrial fibrillation—AF, ischemic heart disease—IHD, myocardial infarction—MI, chronic obstructive pulmonary disease—COPD, peripheral arterial disease—PAD, chronic kidney disease—CKD, tobacco use, and obesity (BMI > 30)), and length of hospital stay (LOS). Moreover, we were interested in the routine blood tests at admittance. From these results, we extracted the following data: hemoglobin levels, hematocrit, neutrophil count, monocyte count, lymphocyte count, platelet count, sodium, and potassium.
We were also interested in the number and location of rib fractures. All data were collected from the hospital’s integrated electronic database.
## 2.3. Inflammatory Biomarkers
From the results of the initial blood test at admittance, we calculated the following ratios:
- MLR = monocytes/lymphocytes
- NLR = neutrophils/lymphocytes
- PLR = platelets/lymphocytes
- SII = (neutrophils × platelets)/lymphocytes
- SIRI = (neutrophils × monocytes)/lymphocytes
- AISI = (neutrophils × monocytes × platelets)/lymphocytes
## 2.4. Study Outcomes
The primary endpoint for our study was the risk of pneumothorax development. We also recorded the length of hospital stay as an outcome, making it our secondary endpoint.
## 2.5. Statistical Analysis
Statistical analysis was performed in SPSS for Mac OS (28.0.1.0) (SPSS, Inc., Chicago, IL, USA). All associations between systemic inflammatory markers and categorical factors were evaluated using chi-square tests, whilst differences in continuous variables were evaluated using Student t-tests or Mann–Whitney tests. Receiver operating characteristic (ROC) curve analysis was used to determine the cut-off values for inflammatory markers and evaluate their predictive potential. Based on the Youden index (Youden index = sensitivity + specificity − 1, ranging from 0 to 1), the suitable NLR, MLR, PLR, SII, SIRI, and AISI cut-off values were determined using the ROC curve analysis.
## 3. Results
During our study period, we identified 611 patients suffering from thoracic trauma who met the inclusion criteria for our study. The mean age was 47.48 ± 18.66 (range 18–98) years (Table 1). The majority of patients included were males (448, $73.32\%$), with 114 of them ($25.44\%$) suffering from pneumothorax at admission. Overall, at admission, 155 patients ($25.37\%$) presented with pneumothorax. The mean length of hospital stay was 6.73 ± 4.14 days.
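The six ratios defined in Section 2.3 can be computed directly from an admission complete blood count. A minimal Python sketch (the function name and example values are illustrative; the standard literature definitions of the ratios are used):

```python
def inflammatory_ratios(neutrophils, lymphocytes, monocytes, platelets):
    """Blood count-derived inflammatory ratios (counts in 10^3/uL).

    Returns a dict with the six markers used in the study, following the
    standard literature definitions.
    """
    return {
        "NLR": neutrophils / lymphocytes,
        "MLR": monocytes / lymphocytes,
        "PLR": platelets / lymphocytes,
        "SII": neutrophils * platelets / lymphocytes,
        "SIRI": neutrophils * monocytes / lymphocytes,
        "AISI": neutrophils * monocytes * platelets / lymphocytes,
    }

# Illustrative admission CBC, not patient data
r = inflammatory_ratios(neutrophils=8.4, lymphocytes=1.2, monocytes=0.9, platelets=250)
print(round(r["NLR"], 2))  # an NLR of 7.0 would exceed the study's cut-off of 6
```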
After splitting the patients into two groups depending on the occurrence of pneumothorax, we noticed an increase in the mean age for the “Pneumothorax” group to 51.68 ± 19.39 ($p = 0.002$), as well as a higher incidence of tobacco use ($p = 0.019$) and obesity ($p = 0.038$). As for the etiology of trauma, we found that the majority of patients suffered from blunt trauma (539/611 patients, $88.22\%$). In this category, we considered all patients who suffered from motor vehicle accidents, workplace accidents, accidental falls, sport-related injuries, and suicide attempts. Among patients who experienced penetrating trauma, we included all patients who experienced hetero-aggression and stabbings. They accounted for $11.78\%$ of all patients and $41.93\%$ of pneumothorax patients. Moreover, patients who suffered from posttraumatic pneumothorax showed higher sodium levels ($p = 0.024$), higher neutrophil ($p < 0.0001$), monocyte ($p < 0.0001$), and platelet ($p = 0.009$) counts, and lower lymphocyte ($p < 0.0001$) counts. All hematological ratios were higher in the “Pneumothorax” group ($p < 0.0001$). The length of hospital stay was also longer in the “Pneumothorax” group ($p = 0.003$). The receiver operating characteristic curves of all hematological ratios were computed in order to assess whether the initial values of these indicators were predictive of the occurrence of pneumothorax in patients with thoracic injuries (Figure 1). Table 2 displays the optimal cut-off values calculated using Youden’s index, the areas under the curve (AUC), and the prediction accuracy of the markers. In terms of systemic inflammatory markers and the length of hospital stay, we computed the Spearman correlation and identified a positive correlation between the NLR, SII, SIRI, and AISI and the length of hospital stay (all $p < 0.05$), as highlighted in Figure 2.
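The cut-off selection described above, maximizing Youden's J over the ROC operating points, can be sketched in a few lines of Python; the scores below are invented NLR values for illustration, not study data:

```python
def youden_cutoff(scores_pos, scores_neg):
    """Choose the cut-off maximizing Youden's J = sensitivity + specificity - 1.

    scores_pos: marker values for patients with the outcome (e.g., pneumothorax);
    scores_neg: marker values for patients without it. A case is called positive
    when its score is >= the cut-off.
    """
    best_j, best_cut = -1.0, None
    for cut in sorted(set(scores_pos) | set(scores_neg)):
        sens = sum(s >= cut for s in scores_pos) / len(scores_pos)
        spec = sum(s < cut for s in scores_neg) / len(scores_neg)
        j = sens + spec - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Invented NLR values for illustration only
cut, j = youden_cutoff([7.1, 8.3, 6.5, 9.0, 5.2], [2.1, 3.4, 6.1, 4.0, 2.8])
print(cut, round(j, 2))  # 5.2 0.8
```

In practice the AUC and cut-offs were obtained in SPSS, as the paper states; this sketch only illustrates the criterion.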
We proceeded with the multivariate analysis of age, risk factors, all inflammatory ratios, and the occurrence of pneumothorax within the patients in the second group, as shown in Table 3. Older age (OR: 1.01, $p = 0.02$), the presence of COPD (OR: 2.93, $p = 0.02$), and tobacco use (OR: 2.20, $p = 0.01$) act as predictive factors for pneumothorax risk. In contrast, obesity acts as a protective factor against pneumothorax (OR: 0.65, $p = 0.03$). We considered an increased value of the NLR as a value higher than the identified cut-off (NLR > 6, $p < 0.001$), and similarly for a high MLR (MLR > 0.62, $p < 0.001$), PLR (PLR > 165.71, $p < 0.001$), SII (SII > 1632.86, $p < 0.001$), SIRI (SIRI > 6.17, $p < 0.001$), and AISI (AISI > 1479, $p < 0.001$).
## 4. Discussion
According to the recent literature, thoracic trauma is a frequently occurring presentation in injured patients [40]. Post-traumatic pneumothorax is a common complication of chest injuries, occurring in between 20 and $55\%$ of patients and associated with relatively high morbidity and mortality; the mean age reported in the literature varies between 39 and 61 years [41,42,43,44,45]. However, it is a preventable cause of death. Early diagnosis of pneumothorax can aid in the management of such patients and prevent hemodynamic deterioration or the occurrence of other complications. In the present study, the incidence of pneumothorax was $25.37\%$ ($n = 155$/611), with a mean age of 47.48 ± 18.66, findings similar to those in the literature. Most studies in the recent literature report a negative impact of smoking on the outcomes of trauma patients [46,47,48].
Despite these findings, a recent paper published by Grigorian et al., which included 282,986 patients with chest injuries, reports a significantly better outcome in smokers, with a lower number of ventilator days ($p = 0.009$) and a lower rate of in-hospital mortality ($p < 0.001$); however, smokers appear to develop a higher rate of pneumonia ($p < 0.001$) [49]. In our study, we identified a total of 34 chronic tobacco users ($5.56\%$) and identified smoking as a negative predictor of outcomes, with a higher incidence of pneumothorax occurrence (OR = 2.29, $p = 0.01$). A plausible reason for this discrepancy can be attributed to the high proportion of smokers included in the study of Grigorian et al., totaling 57,619 patients ($20.4\%$). The role of obesity as a risk factor for the outcomes of trauma patients is a topic of debate in the current literature. There are plenty of papers, including complex meta-analyses, that advocate for poorer outcomes of obese patients following major trauma [50,51,52,53]. Some papers, however, found that obese patients suffering from trauma have a more favorable outcome with a faster recovery [54,55]. According to our findings, obesity is a protective factor against the development of pneumothorax in patients suffering from chest injuries (OR = 0.65, $p = 0.003$). One of the reasons for such paradoxical findings can be attributed to the protective role of the adipose tissue in blunt chest injuries. The type of trauma appears to also play an important role in the development of pneumothorax. We note that the majority of patients included in our study suffered from blunt chest injuries, which is to be expected, as no wartime injuries were reported in the past few years.
We also note that the majority of patients with penetrating trauma developed pneumothorax (65/72), but as the number of patients suffering from penetrating trauma is low, we consider these data purely observational. The predictive value of hematological ratios in trauma patients has been increasingly researched, although with conflicting results. Additionally, there has been a significant rise in the need for prognostic tools in trauma patients with unfavorable evolution and decompensation. Our study included 611 patients diagnosed with thoracic trauma. We identified the inflammatory biomarkers in patient blood samples at admission and determined the presence of pneumothorax using CT scans at admission. Our study’s most important outcome is that high baseline values of the NLR, MLR, PLR, AISI, SII, and SIRI are strong predictors of the development of post-traumatic pneumothorax. To the best of our knowledge, this is the first study to demonstrate that patients with high hematological ratios were more likely to develop pneumothorax and that the ratios predict a longer hospital stay. According to Soulaiman et al., there is a statistically proven association between the NLR at admission and the outcomes of trauma patients, whereby a higher NLR predicts an unfavorable outcome [8]. According to this study, the optimal cut-off value for the NLR at admission was 4, which is close to our findings, with an AUC = 0.63 ($70.3\%$ sensitivity and $56.4\%$ specificity), highlighting a satisfactory test quality. In comparison, we computed a cut-off value for the NLR of 6, with an AUC = 0.79, highlighting an increased test quality. In contrast, other studies, such as the one conducted by Dilektasli et al., revealed no statistically significant association between the NLR calculated from blood samples at admission and the outcomes of trauma patients [56]. These controversial findings inspired another study, conducted by Younan et al.
[57], to investigate the association between the NLR and the outcomes of trauma patients. According to the aforementioned study, an increasing trajectory of the NLR (calculated at admission, and 24 and 48 h later) is strongly associated with the outcomes of the patients ($p = 0.002$) and the length of hospital stay ($p < 0.001$). The total number of patients included in their study is more modest (207 patients), and patients with all types of trauma were included, not just chest injuries. Despite these limitations, the findings of their study appear to support ours. According to Jo et al., the PLR has significant predictive power for the outcomes of trauma patients ($p < 0.0001$) [27]; however, they found a higher lymphocyte count in the non-survival group compared to the survival group (183.0 [141.0;230.0] vs. 227.0 [188.0;265.0]). The PLR was also lower in the non-survival group compared to the survival group (51.3 [32.3;77.9] vs. 124.2 [79.5;187.2]). These findings are contrary to ours, where the lower the lymphocyte count and the higher the PLR, the worse the outcome. A recent study by Rau et al. [58], including 479 trauma patients, found that comorbidities and hematological ratios (NLR, MLR, and PLR) do not possess any predictive capability for the outcomes of such patients. Although some of their findings appear to contradict ours, we must remember that their study included all types of trauma and that survival was considered the final outcome. The fact that a majority of the patients included in their study had suffered a head or neck injury may explain their findings. Another reason for the lack of association between the hematological ratios and the outcomes of trauma patients can be attributed to the selection criteria.
Their study also included patients who underwent invasive procedures, such as surgery, or patients who required resuscitation or blood transfusion, which are factors that can alter hematological ratios. We took these possible limitations of such reputable studies into account; this is the reason why our study’s main focus was thoracic trauma, with specific exclusion criteria. In the current study, according to the multivariate analysis, all the hematological ratios were able to predict the occurrence of pneumothorax ($p < 0.0001$ in all cases). Moreover, we showed that some increased hematological ratios can indirectly predict the occurrence of complications through an increased length of hospital stay (SII: $p = 0.022$, $r = 0.093$; SIRI: $p = 0.008$, $r = 0.108$; AISI: $p = 0.009$, $r = 0.106$). Lastly, the present paper also revealed a major risk factor for traumatic pneumothorax development in tobacco use (OR = 2.29, $p = 0.019$), whilst obesity is a protective factor (OR = 0.65, $p = 0.038$). The findings of our previous studies on the role of hematological biomarkers as predictive factors in the outcomes of both splenic trauma [29] and abdominal trauma [30] support the findings of the current paper. In the first paper, we found a significant association between the NLR and the severity of splenic injury ($p = 0.02$). The findings of the second paper revealed that the NLR, PLR, MLR, AISI, SII, and SIRI are powerful predictors of the development of acute kidney injury, mortality, and a composite endpoint of these two outcomes in abdominally injured patients ($p < 0.001$ in all cases). Nevertheless, the present study has a set of limitations. The first lies in the design of the study as a retrospective, monocentric study. Further improvement could be brought by extending the research to a multicentric, prospective study.
Secondly, due to the retrospective nature of our study, we were unable to gather enough data on chronic medications administered before admission (corticosteroids or anti-inflammatory drugs), which prevented us from assessing how various medications affect inflammatory biomarkers. Lastly, the study only analyzed the inflammatory biomarkers at admission. Repeated determination throughout the hospitalization period may better reflect the dynamics of the inflammatory process and may improve the quality of our findings. In spite of all these limitations, we consider our findings to be a stepping stone toward the development of new risk scoring systems for the improvement of the overall management of thoracic trauma patients and the early identification of patients at risk. We consider these hematological ratios to be especially important, taking into account their ease of determination and the low cost of assessment. ## 5. Conclusions Our data show that patients with thoracic injuries, who have elevated NLRs, PLRs, MLRs, SIIs, SIRIs, and AISIs at admission at values that are above our calculated cutoff, are likely to have sustained severe thoracic trauma, are likely to have developed pneumothorax, and will likely follow a long evolution with a long duration of hospitalization. Additionally, we proved that tobacco use is a strong predictor of the development of post-traumatic pneumothorax in such patients, whilst obesity is a protective factor. Given the ease of use of such ratios and the low cost of these metrics, they can be used in clinical practice to categorize patient treatment groups, develop predictive patterns, and classify risk groups for admission.
# Thyroid Profile in the First Three Months after Starting Treatment in Children with Newly Diagnosed Cancer
## Abstract
### Simple Summary
Thyroid dysfunction during childhood may affect daily energy, growth, body mass index and bone development. Thyroid dysfunction may occur in children with cancer due to chemotherapy or other drugs, radiotherapy, the tumor itself or severe illness. The aim of this prospective study was to determine the percentage, severity and risk factors of changing thyroid hormone concentrations in the first three months of childhood cancer treatment. Subclinical hypothyroidism (normal thyroid hormones, with elevated thyroid stimulating hormone (TSH) according to age) was present in $8.2\%$ of children at diagnosis and $2.9\%$ of children three months after starting treatment. Subclinical hyperthyroidism (normal thyroid hormones, with lowered TSH values according to age) was present in $3.6\%$ of children at diagnosis and $0.7\%$ of children after three months. In $28\%$ of children, the concentration of free thyroxine (FT4) decreased by ≥$20\%$. We conclude that children with cancer are at low risk of developing hypo- or hyperthyroidism in the first three months after starting treatment but may develop a decline in FT4.
### Abstract
Background: Thyroid hormone anomalies during childhood might affect neurological development, school performance and quality of life, as well as daily energy, growth, body mass index and bone development. Thyroid dysfunction (hypo- or hyperthyroidism) may occur during childhood cancer treatment, although its prevalence is unknown. The thyroid profile may also change as a form of adaptation during illness, which is called euthyroid sick syndrome (ESS). In children with central hypothyroidism, a decline in FT4 of >$20\%$ has been shown to be clinically relevant. We aimed to quantify the percentage, severity and risk factors of a changing thyroid profile in the first three months of childhood cancer treatment.
Methods: In 284 children with newly diagnosed cancer, a prospective evaluation of the thyroid profile was performed at diagnosis and three months after starting treatment. Results: Subclinical hypothyroidism was found in $8.2\%$ and $2.9\%$ of children and subclinical hyperthyroidism in $3.6\%$ and $0.7\%$ of children at diagnosis and after three months, respectively. ESS was present in $1.5\%$ of children after three months. In $28\%$ of children, the FT4 concentration decreased by ≥$20\%$. Conclusions: Children with cancer are at low risk of developing hypo- or hyperthyroidism in the first three months after starting treatment but may develop a significant decline in FT4 concentrations. Future studies are needed to investigate the clinical consequences thereof.
## 1. Introduction
Thyroid hormones are essential during childhood for adequate mental development, linear growth, bone development and metabolic regulation [1,2]. Signs and symptoms of thyroid dysfunction can include overweight, declining linear growth, mental retardation in the young, constipation (hypothyroidism), tachycardia and growth acceleration (hyperthyroidism), or fatigue and emotional imbalances (both). In children with cancer, thyroid dysfunction may present with symptoms that are regularly observed during childhood cancer treatment and thus may be overlooked. The thyroid gland can be damaged in children with any type of cancer by the tumor itself, chemotherapy (e.g., busulphan), radiation exposure or immunotherapy, resulting in thyroidal hypo- or hyperthyroidism [3]. In several small studies, the prevalence of primary hypothyroidism during cancer treatment varied between 0 and $18\%$ [4,5,6,7,8,9]. Besides direct damage to the thyroid gland, thyroid hormone metabolism in children with cancer may also be disrupted by damage to the hypothalamic–pituitary region as a consequence of a brain tumor or cranial irradiation (central hypothyroidism).
Moreover, specific drugs may influence the thyroid profile without actual thyroid or pituitary gland damage, as is seen, for example, after the administration of asparaginase, with a decrease in thyroxine binding globulin (TBG) concentration [8], or after the administration of corticosteroids, with lowered thyroid stimulating hormone (TSH), triiodothyronine (T3) and TBG concentrations and increased reverse T3 (rT3) concentrations [8,9]. Lastly, thyroid hormone metabolism may change during childhood cancer treatment as a consequence of an adaptive mechanism during illness called “euthyroid sick syndrome” (ESS) [10]. In this case, concentrations of thyroxine (T4) and T3 decrease due to two mechanisms: (1) downregulation of hypothalamic thyrotropin-releasing hormone (TRH) secretion and (2) changed activity of the liver deiodinases, resulting in decreased conversion of T4 into T3 and increased conversion of T4 into rT3 [11]. In children, ESS has been described during severe illness and anorexia and is thus not associated with the underlying disease per se, but with its severity [12]. For the presence of ESS, different definitions are used, and in the few small studies that have been conducted, the prevalence of ESS during childhood cancer treatment, depending on its definition, varied between 0 and $100\%$ [4,5,6,7,8,9]. When children with cancer have hypo- or hyperthyroidism due to pituitary or thyroidal damage, this is considered a pathophysiological state and needs treatment. However, in case of acute illness, changes in the thyroid profile (ESS) are considered “physiological” and may even be protective. Therefore, it is not recommended to treat children who develop low thyroid hormone concentrations during acute illness with thyroid hormone [13]. In children who develop mild central hypothyroidism after treatment for a brain tumor, a decline in FT4 of >$20\%$, even within reference ranges, was shown to be clinically relevant [14].
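The >20% decline criterion is a simple relative change between two FT4 measurements. A minimal Python sketch (the values are illustrative, not patient data):

```python
def ft4_decline_pct(ft4_baseline, ft4_followup):
    """Relative FT4 decline (%) between two measurements (same unit, e.g. pmol/L)."""
    return 100.0 * (ft4_baseline - ft4_followup) / ft4_baseline

# Illustrative values: a fall from 18.0 to 14.0 pmol/L is a ~22% decline,
# crossing the >20% threshold even if both values lie within the reference range
print(round(ft4_decline_pct(18.0, 14.0), 1))  # 22.2
```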
Although mild central hypothyroidism may not be comparable with ESS, it may be hypothesized that a prolonged decline in the FT4 concentration of >$20\%$ in children who are not acutely but "chronically" ill (such as during a two-year treatment period for childhood leukemia) does impact bone, muscle and body mass index (BMI) development or daily energy [15]. This has not been studied thus far. Because there is a lack of studies reporting on thyroid hormone metabolism in large cohorts of children treated for cancer, we aimed to evaluate the percentage, severity and risk factors of a changed thyroid profile in children during treatment for cancer.

## 2.1. Patients

We performed a prospective observational cohort study over a two-year period (January 2020 to December 2021). The thyroid profile was measured at diagnosis and three months after starting chemotherapy or radiotherapy in newly diagnosed children (<21 years) with leukemia, lymphoma, sarcoma or a non-pituitary brain tumor at the Princess Máxima Center for Pediatric Oncology. Children with known previous thyroid disease, Down syndrome, a thyroid cancer predisposition syndrome, a history of neck irradiation or meta-iodobenzylguanidine (MIBG) treatment, or a brain tumor in the hypothalamic–pituitary region were excluded.

## 2.2. Data Collection

The thyroid profile, using TSH, FT4 and rT3, was measured at the time of diagnosis (range of ±35 days from diagnosis) and three months later (range of 60–160 days after diagnosis). Anti-thyreoperoxidase (anti-TPO) concentrations were measured at diagnosis. Blood results were interpreted by the treating physician. In case of aberrant thyroid function tests (FT4 below or above the reference range, or TSH <0.30 or >10 mU/L), children were referred to the pediatric endocrinologist and treated if needed.
Clinical data on anthropometrics (height, weight and BMI), general well-being (body temperature, vomiting and nutritional status) and overall physical condition were extracted from patients' electronic medical records on the day of blood sampling. Physical condition was scored as "good" (no complaints), "medium" (moderate complaints, "not feeling well" or "feeling tired") or "poor" (severe complaints or "feeling ill"), as reported by the health care provider in the electronic patient chart.

## 2.3. Laboratory Assays

A description of the laboratory assays is shown in Supplementary File S1.

## 2.4. Definitions

Thyroidal hypothyroidism was defined as present if the plasma TSH concentration was above the reference range (>5.0 mU/L), combined with a plasma FT4 concentration below the reference range. Thyroidal subclinical hypothyroidism was defined as present if the plasma TSH concentration was above the reference range (>5.0 mU/L), combined with a plasma FT4 concentration within the reference range. Subclinical hyperthyroidism was defined as present if the plasma TSH concentration was below the reference range, combined with a plasma FT4 concentration within the reference range. Central hypothyroidism was defined as present if the plasma FT4 concentration was below the reference range, combined with a non-elevated TSH concentration and a non-elevated rT3 concentration. ESS was defined as present if the plasma FT4 concentration was below the reference range, combined with a non-elevated TSH concentration and an elevated rT3 concentration.

## 2.5. Statistics

Data are presented as means ± SDs or medians (ranges) for continuous variables, depending on the distribution, and as percentages for categorical variables. Differences between groups were examined using unpaired Student's t-tests for normally distributed continuous data and Mann–Whitney U tests for continuous data with a skewed distribution.
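The between-group test-selection logic of this section (continuous versus categorical data, normality-dependent choice of test) can be sketched in a few lines. This is a minimal stdlib illustration with hypothetical helper names, not the study's code; the actual analyses were run in a statistics package:

```python
import statistics


def choose_between_group_test(data_type, normally_distributed=False,
                              expected_counts_ok=True):
    """Pick the between-group test following the rules of the Statistics section."""
    if data_type == "continuous":
        return "unpaired t-test" if normally_distributed else "Mann-Whitney U"
    if data_type == "categorical":
        return "chi-square" if expected_counts_ok else "Fisher exact"
    raise ValueError(f"unknown data type: {data_type}")


def pooled_t_statistic(a, b):
    """Two-sample Student t statistic with pooled variance (stdlib only)."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a)
                  + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return ((statistics.mean(a) - statistics.mean(b))
            / (pooled_var * (1 / na + 1 / nb)) ** 0.5)
```

In practice the normality decision would itself come from Q–Q plots or a Shapiro–Wilk test, and p-values would be obtained from the t distribution rather than from the raw statistic.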
For categorical data, χ2 tests or Fisher's exact tests (when the assumptions for the chi-square test were violated) were used. Between-time-point differences were evaluated using paired Student's t-tests for continuous data with a normal distribution and Wilcoxon matched-pair signed rank tests for continuous data with a skewed distribution. To assess violation of the normality assumption, Q–Q plots of the residuals and the Shapiro–Wilk test were used. For statistical analysis of changes in thyroid hormone concentrations, only paired blood samples per patient were used. The Pearson correlation coefficient was estimated to study the strength of linear associations between two continuous variables. Multivariable logistic regression analyses were used to estimate the association between covariates and two outcomes: elevated rT3 concentrations and ≥$20\%$ decline in FT4 concentrations. Independent variables for the multivariable logistic regression were selected by estimating the univariate model and by considering the clinical relevance of each variable; the final model therefore included both variables that were significant in the univariate analysis and clinically relevant factors. Odds ratios (ORs) along with $95\%$ CIs are reported. Analyses were performed using SPSS, version 27.0. p-values of <0.05 were considered statistically significant.

## 2.6. Ethics

The research protocol was approved by the medical ethical committee of the Princess Máxima Center (NedMec NL69960.041.19). For ethical reasons, blood samples for the study were only taken if sampling for clinical reasons was performed simultaneously. Informed consent was given by all children and/or their parents/legal representatives, depending on age.

## 3.1. General Patient Characteristics

Of 519 children assessed for eligibility, 284 were included (Figure 1).
Of the included children, 141 ($50\%$) were diagnosed with leukemia, 74 ($26\%$) with lymphoma, 38 ($13\%$) with sarcoma and 31 ($11\%$) with a brain tumor (Table 1). The median age at diagnosis was 9.4 years (range of 0.0–19 years), and 127/284 ($45\%$) children were female.

## 3.2. Thyroid Profile

At diagnosis, TSH and FT4 were both measured in 220 children; in $81\%$ (179/220) of them, both were within reference ranges (Table 2). Three months after diagnosis, in $91\%$ (252/276) of children, both TSH and FT4 concentrations were found to be within reference ranges. In two children ($1.2\%$), elevated anti-TPO antibodies were detected, and both were euthyroid.

## 3.2.1. (Subclinical) Hypo- and Hyperthyroidism

At diagnosis, $8.2\%$ (18/220) of children had subclinical hypothyroidism, with a median TSH concentration of 6.30 mIU/L (range of 5.00–11.00). In $3.6\%$ (8/220) of children, subclinical hyperthyroidism was found (median TSH of 0.21 mIU/L (range of 0.07–0.34)). Three months after diagnosis, $2.9\%$ (8/276) of children had subclinical hypothyroidism (median TSH of 6.75 mIU/L (range of 5.30–11.00)). None of these children required treatment with thyroxine. In total, 2 of 276 children ($0.7\%$) had subclinical hyperthyroidism (TSH, 0.31–0.33 mIU/L) after three months.

## 3.2.2. ESS

At diagnosis, none of the children had ESS. After three months, $1.5\%$ (4/265) of children had developed ESS. In $33\%$ (49/148) of children, an isolated rT3 elevation was found at diagnosis (median rT3 concentration of 0.25 ng/mL (range of 0.22–0.58)), which increased to $50\%$ (133/265) after three months (median of 0.27 ng/mL (range of 0.22–2.36)). A significant, weak, positive correlation was found between the FT4 and rT3 concentrations three months after diagnosis ($r = 0.18$, $95\%$ CI 0.06–0.29).
Children with an isolated elevated rT3 concentration after three months were slightly younger (7.7 compared with 9.6 years), more frequently had a brain tumor ($74\%$ versus $48\%$; $p = 0.009$) and were less often treated with anthracyclines ($65\%$ versus $80\%$; $p = 0.006$) than those without. No associations were found between corticosteroid use <48 h earlier or physical condition and having elevated rT3. In multivariable analysis, brain tumor diagnosis was the only significant risk factor for developing an elevated rT3 concentration three months after diagnosis (OR 3.17, $95\%$ CI 1.19 to 8.41) (Table 3).

## 3.2.3. Central Hypothyroidism

After three months, $1.9\%$ (5/265) of children were suspected of having central hypothyroidism, with lowered FT4 (median FT4 of 8 pmol/L (range of 8–9)), non-elevated TSH (median TSH of 2.80 mIU/L (range of 1.80–4.00)) and non-elevated rT3 concentrations (median of 0.17 ng/mL (range of 0.11–0.20)). All five had been diagnosed with leukemia at a median age of 5.4 years (range of 4.4–13.4). None was started on thyroxine treatment, but the thyroid profile was followed over time.

## 3.3. Decline in FT4 over Time

Overall, the median FT4 concentration declined significantly in three months' time from a median of 16 to 14 pmol/L ($p < 0.001$), with no change in TSH ($p = 0.334$). Median rT3 concentrations increased significantly (0.18 versus 0.22 ng/mL; $p < 0.001$) (Table 2, Figure 2). At the time of diagnosis, $29\%$ (82/284) of children had received corticosteroids <48 h earlier or chemotherapy before the first measurement. In this group, at diagnosis, a lower median TSH and a higher median FT4 concentration were found compared with those who had not (TSH, 1.20 (range of 0.07–11.00) versus 2.30 mIU/L (range of 0.34–9.40); $p < 0.001$; FT4, 17 (range of 11–28) versus 16 pmol/L (range of 10–29); $p = 0.017$).
In the 22 children who had received corticosteroids <48 h before the blood withdrawal after three months, no differences were found in either TSH or FT4 concentration (Supplementary File S2). Because of the differences found in median plasma TSH and FT4 concentrations in the children who had already received corticosteroids <48 h earlier or chemotherapy before their first thyroid hormone measurement at diagnosis, these children were excluded from the analysis of the changes in thyroid function over time. TSH and FT4 concentrations were found to decline significantly in three months' time (median TSH from 2.35 to 1.90 mIU/L; $p < 0.001$; median FT4 from 16 to 14 pmol/L; $p < 0.001$). The median rT3 concentrations increased significantly (0.16 to 0.22 ng/mL; $p < 0.001$) (Table 2). The median overall change in FT4 concentration in children who had not received corticosteroids <48 h earlier or chemotherapy before the first measurement was −$11\%$ (range of −$47\%$ to +$100\%$). FT4 declines of ≥$10\%$, ≥$20\%$ and ≥$30\%$ were found in $41\%$ (69/136), $28\%$ (38/136) and $7.4\%$ (10/136) of children, respectively. In children with an FT4 decline of ≥$20\%$, the median FT4 concentration declined from 17 (range of 10–29) to 12 pmol/L (range of 8–16), with no changes in median TSH and rT3 concentrations. Of these children, $36.1\%$ had an elevated rT3 concentration after three months. The univariate analysis showed that children with a ≥$20\%$ FT4 decline were of similar age (7.7 ± 5.1 years versus 10.0 ± 5.7; $p = 0.200$), more often received antimetabolites ($84\%$ versus $67\%$; $p = 0.049$) and showed a trend towards more frequent treatment with vinca-alkaloids ($92\%$ versus $80\%$; $p = 0.081$) compared with those with no decline or a decline of <$20\%$. The multivariable analysis, however, did not show risk factors for a ≥$20\%$ FT4 decline (Table 3).
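The percent-decline thresholds used above can be made explicit with a small sketch; the function names are illustrative, not from the study:

```python
def ft4_change_pct(baseline, followup):
    """Percent change in FT4 between diagnosis and three months
    (negative values indicate a decline)."""
    return (followup - baseline) / baseline * 100.0


def decline_bucket(baseline, followup):
    """Bucket a child by the decline thresholds reported in this section."""
    decline = -ft4_change_pct(baseline, followup)
    if decline >= 30:
        return ">=30%"
    if decline >= 20:
        return ">=20%"
    if decline >= 10:
        return ">=10%"
    return "<10%"
```

For example, the median decline from 17 to 12 pmol/L reported for the ≥$20\%$ group corresponds to a change of about −$29\%$.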
No clinically significant effect of a ≥$20\%$ FT4 decline from baseline on BMI SDS or linear growth was found.

## 3.4. Radiotherapy

Radiotherapy was given to 21 children ($7.4\%$) within the three months; the radiation field possibly included the thyroid gland in seven children and the hypothalamic–pituitary region in 20 children. In total, 18 of the 21 children were irradiated for a brain tumor, of which 7 were craniospinal tumors (medulloblastoma, n = 5 (total dose of 54.0 Gy), and ependymoma, n = 2 (total dose of 59.4 Gy)) and 11 were cranial tumors (high-grade glioma, n = 10 (total dose of 13–60 Gy), and germ-cell tumor, n = 1 (total dose of 40.0 Gy)). Three children were irradiated for a sarcoma (2/3 orbit, total dose of 45–50 Gy). The median FT4 in children with radiotherapy changed from 15 (range of 13–24) to 14 pmol/L (range of 8–23) ($p = 0.034$), while the median TSH remained unchanged. Reverse T3 concentrations after three months were significantly higher in children who had received radiotherapy than in children who had not (0.28 (range of 0.14–0.62) versus 0.21 ng/mL (range of 0.10–2.36); $p = 0.015$).

## 4. Discussion

In this large prospective study investigating the percentage and severity of thyroid dysfunction in children treated for newly diagnosed cancer, we found a low percentage of (subclinical) hypo- and hyperthyroidism in the first three months after starting treatment, which may be considered reassuring. In addition, the percentage of children that developed ESS, defined in this study as having lowered FT4, normal TSH and increased rT3, was low. However, in a considerable percentage of children, the thyroid profile was found to have changed, with an individual decline in FT4 concentration of ≥$20\%$ in $28\%$ of children after three months.
We did not detect clinical consequences of this change in FT4 in this relatively short period of time, and future studies with prolonged follow-up are needed. Based on these results, we suggest that with the current treatment protocols, surveillance for hypo- and hyperthyroidism is unnecessary at this stage of treatment. However, our results do illustrate that the thyroid profile can change severely during cancer treatment in children, which may reflect adaptation to an altered metabolic state during illness or may be iatrogenic [16,17,18]. In ESS, the adaptive downregulation of TRH secretion may result in low-to-normal TSH concentrations with lowered thyroid hormone concentrations. Apart from this, in ESS, the alteration of liver deiodinases decreases the conversion of T4 into T3 and increases the conversion of T4 into rT3. In case of doubt between central hypothyroidism and ESS, the determination of rT3 may be used to differentiate them, as in true central hypothyroidism rT3 is low, while in ESS it is increased. The high percentage of isolated elevated rT3 concentrations in our cohort may thus illustrate the presence of (mild) ESS, which may not be surprising, as these children undergo intensive treatment [19]. We could not correlate the rT3 increase to corticosteroid use, although $90\%$ of children had received different kinds of corticosteroids within the three months. Brain tumor diagnosis was found to be a risk factor for elevated rT3. Although no associations were found among poor physical state, corticosteroids and elevated rT3, it must be considered that brain tumor patients may have been in a worse physical state than others, partly because of cranial radiotherapy. No radiotherapy-induced central hypothyroidism was found, as expected, because radiotherapy is unlikely to cause pituitary dysfunction after such a short period of time [20]. Van Iersel et al.
showed that an FT4 decline of >$20\%$ during prolonged follow-up, although within reference ranges, was associated with weight gain, reduced linear growth and less improvement of intelligence scores over time in childhood brain tumor survivors [14]. This FT4 decline was regarded as a reflection of mild central hypothyroidism. Even though the etiology of declining FT4 as a result of mild central hypothyroidism and of (mild) ESS may not be comparable, we hypothesize that prolonged lowered thyroid hormone concentrations in (non-acutely ill) children with cancer may contribute to adverse late effects, such as short stature, weight gain, dyslipidemia, fatigue or the pathogenesis of early frailty, in childhood cancer survivors [14,21,22,23]. Therefore, we aim to follow thyroid hormone parameters in relation to these possible adverse late effects until the end of cancer treatment in this large prospective cohort. It is not recommended to treat children with thyroid hormone for ESS during acute illness [13]. When FT4 declines over time and remains lowered for a prolonged period in "chronically" ill children, this disease state may, however, be compared to the adaptation of the hypothalamic–pituitary axes that is also encountered in children with other chronic diseases. Examples of such diseases are cystic fibrosis and chronic kidney disease, whereby affected children develop low insulin-like growth factor-1 concentrations or delayed puberty due to chronic illness [24,25]. In these situations, treatment with sex steroids or growth hormone to improve bone development and final height is considered [26,27]. With this in mind, thyroid hormone treatment might be beneficial in the situation of prolonged lowered thyroid hormones in children with chronic illness or prolonged disease. This question needs to be addressed in future studies. Our study also has several limitations.
Firstly, the results might not be applicable to all children with cancer, because for this study we only included children treated for leukemia, lymphoma, sarcoma or a non-pituitary brain tumor. Future studies may be performed to investigate changes in the thyroid profile in children with other types of childhood cancer. Secondly, although we aimed to measure the thyroid profile before any drugs had been administered, $29\%$ of the children had already received corticosteroids <48 h earlier or chemotherapy before the first thyroid hormone measurement. For optimal analysis, we therefore excluded these children from the analysis of changes in TSH and FT4 concentrations. Moreover, data on physical condition were scored by the researchers in three categories, based on the notes of the health care provider in the electronic patient chart, which is a subjective way of scoring physical condition and thus a limitation.

## 5. Conclusions

Children with cancer, treated within current treatment protocols, do not seem to be at risk of hypo- and hyperthyroidism in the first three months of cancer treatment. In $28\%$ of children, however, the FT4 concentration decreased by ≥$20\%$ during cancer treatment. The long-term clinical consequences thereof have to be investigated in future studies.
# Blackcurrant Alleviates Dextran Sulfate Sodium (DSS)-Induced Colitis in Mice

## Abstract

Previous studies have reported that anthocyanin (ACN)-rich materials have beneficial effects on ulcerative colitis (UC). Blackcurrant (BC) is known as a food rich in ACN, but studies demonstrating its effect on UC are rare. This study investigated the protective effects of whole BC in mice with colitis induced by dextran sulfate sodium (DSS). Mice were orally given whole BC powder at a dose of 150 mg daily for four weeks, and colitis was induced with $3\%$ DSS in drinking water for six days. Whole BC relieved symptoms of colitis and pathological changes in the colon. The overproduction of pro-inflammatory cytokines such as IL-1β, TNF-α, and IL-6 in serum and colon tissues was also reduced by whole BC. In addition, whole BC significantly lowered the mRNA and protein levels of downstream targets in the NF-κB signaling pathway. Furthermore, BC administration increased the expression of genes related to barrier function: ZO-1, occludin, and mucin. Moreover, whole BC modulated the relative abundance of gut microbiota altered by DSS. Therefore, whole BC has demonstrated the potential to prevent colitis through attenuation of the inflammatory response and regulation of the gut microbial composition.

## 1. Introduction

Inflammatory bowel disease (IBD) refers to a chronic inflammatory condition of the intestinal tract that increases health and economic burdens, owing to its rising global prevalence, and lowers quality of life [1]. Ulcerative colitis (UC), one of the typical IBDs, appears only in the colon and is marked by superficial mucosal inflammation [2]. A cross-sectional study of 15 countries in Asia and the Middle East reported that UC is twice as prevalent as Crohn's disease and occurs more frequently in men in their 30s [3].
In addition, it is essential to treat UC because it can develop into colorectal cancer if it persists for a long time [4]. UC is characterized by diarrhea, bloody stools, urgency, increased frequency of defecation, and, in severe cases, fever and weight loss [5]. It is thought that UC is caused by the disruption of intestinal homeostasis due to genetic, microbiological, immunological, and environmental factors, including diet, smoking, and stress [5,6]. Drugs such as 5-aminosalicylic acid (5-ASA), biological drugs (anti-tumor necrosis factor-α (anti-TNF-α) and anti-adhesion molecule inhibitors), immunosuppressants, and corticosteroids have been used to treat UC [5]. However, it has been reported that the remission rate of UC is only $15\%$ to $44.9\%$, and adverse events such as infection, UC flare, nasopharyngitis, myelosuppression, liver toxicity, and malignancy occur [5,6,7,8]. Therefore, to develop other safe and effective treatments, natural products using polyphenols such as apigenin and curcumin, and polysaccharides such as those from *Scutellaria baicalensis* Georgi, are being studied [9,10]. Anthocyanins (ACN), belonging to the flavonoid subgroup of polyphenols, are found in flowers, vegetables, and fruits and are water-soluble pigments in red, blue, and purple [11]. Various health benefits of ACNs have been discovered; in particular, ACN supplements have been shown to improve gut health by modifying the gut microflora and enhancing the intestinal barrier, thereby reducing the potential risk of inflammation [11,12,13]. ACN-rich foods include berries (blackcurrants, blueberries, and raspberries) and dark red vegetables (red cabbage, eggplant, and purple wheat), among which blackcurrants have been reported to have a higher total ACN content than blueberries [11,14].
Blackcurrant (BC) has been suggested to possess various health effects, including prevention of obesity, improvement of cognitive impairment due to aging, and reduction of diabetes-related cardiovascular dysfunction [15,16,17]. Recently, ACN dietary supplements consisting of BC and bilberry extracts have shown anti-inflammatory effects in intestinal epithelial cells [18]. Additionally, silver nanoparticles based on BC extracts were observed to relieve inflammation in induced colitis in mice [19]. However, these studies are insufficient to confirm the effect of BC on improving intestinal inflammation. Furthermore, most of these studies have verified the physiological activity of BC extracts, and studies on BC in its whole form are rare. Therefore, the aim of this study was to investigate whether the intake of whole BC alleviates dextran sulfate sodium (DSS)-induced colitis in mice.

## 2.1. Materials and Reagents

The commercial freeze-dried powder of whole BC was obtained from Sujon Berries (Nelson, New Zealand). According to Willems et al. [2017], 1 g of Sujon's BC powder contained 23.1 mg of anthocyanin, 0.9 g of carbohydrates, and 8.2 mg of vitamin C [20]. DSS was bought from MP Biochemical (MW: 36–50 kDa; Solon, OH, USA). A TNF-α enzyme-linked immunosorbent assay (ELISA) kit was bought from Invitrogen (Vienna, Austria), and interleukin (IL)-1β and IL-6 ELISA kits were purchased from R&D Systems (Minneapolis, MN, USA). The RNAiso Plus kit, PrimeScript RT Master Mix, and bicinchoninic acid (BCA) protein assay kits were purchased from Takara Bio, Inc. (Shiga, Japan). RIPA buffer was procured from Thermo Scientific Inc. (Rockford, IL, USA). Primary antibodies against phosphorylated p65 (p-p65), p65, inducible nitric oxide synthase (iNOS), cyclooxygenase-2 (COX-2), and β-actin were purchased from Cell Signaling Technology (Danvers, MA, USA).

## 2.2. Animals

The animal experiment was approved by the Animal Ethics Committee of Chungnam National University (IACUC approval number: 202112-CNU-214). Five-week-old male C57BL/6J mice were acquired from Central Lab Animal, Inc. (Seoul, Republic of Korea). The experimental design is illustrated in Figure S1. The mice were housed under the same conditions (temperature of 22 ± 2 °C, relative humidity of 50 ± $5\%$, and 12 h/12 h light/dark cycles) and acclimatized for six days. After the adaptation period, 24 mice were separated into three groups (n = 8 per group): the Vehicle group, a normal control group not treated with DSS; the DSS group, a DSS-treated control group; and the DSS + BC group, treated with DSS and blackcurrant. In the DSS + BC group, BC powder diluted in phosphate-buffered saline (PBS) was orally administered at a dose of 150 mg/mouse per day throughout the experimental period. The Vehicle and DSS groups were given the same volume of PBS as the DSS + BC group. To induce colitis in the DSS and DSS + BC groups, $3\%$ DSS (w/v) in drinking water was given for six days from the 21st day of the experiment. One day before the experiment's termination, DSS was replaced with normal water. Symptoms of colitis were monitored daily using the disease activity index (DAI) while DSS was administered. The DAI, slightly modified from that described by Peng et al. [2019], was measured using scores for body weight loss (0, none; 1, 1–$5\%$; 2, 5–$10\%$; 3, >$10\%$), stool consistency (0, normal; 1, slightly loose feces; 2, loose feces; 3, watery diarrhea), and bloody stools (0, none; 1, slightly bloody; 2, bloody; 3, gross bleeding) [21]. Feces were collected the day before the sacrifice. Mice were euthanized after fasting for 12 h. Blood and colon tissues were obtained after the experiment was completed. Blood was centrifuged at 1100× g for 15 min to obtain the serum.
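A minimal sketch of the DAI scoring just described, assuming the composite index is the sum of the three subscores (a common convention; the combination rule is not stated above) and assigning boundary weight-loss values to the higher score:

```python
def weight_loss_score(loss_pct):
    """0-3 subscore for body weight loss (boundary handling is an assumption)."""
    if loss_pct <= 0:
        return 0
    if loss_pct < 5:
        return 1
    if loss_pct <= 10:
        return 2
    return 3


def dai(weight_loss_pct, stool_score, blood_score):
    """Composite disease activity index as the sum of the three subscores."""
    if not (0 <= stool_score <= 3 and 0 <= blood_score <= 3):
        raise ValueError("stool and blood subscores must be in 0-3")
    return weight_loss_score(weight_loss_pct) + stool_score + blood_score
```

For example, a mouse with $7\%$ weight loss, loose feces (2) and slightly bloody stools (1) would score 2 + 2 + 1 = 5 under this sketch.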
After measuring the length and weight of colonic tissue samples, some were fixed in $4\%$ formalin for histological assessment. The remaining colon tissues were immediately frozen in liquid nitrogen and kept at −80 °C until the experiment.

## 2.3. Histologic Analysis

Hematoxylin and eosin (H&E) staining was performed on 4 μm thick sections of colon tissues fixed in $4\%$ formalin. Colon slides were examined using a light microscope (DM2500, Leica Microsystems, Wetzlar, Germany) installed at the Center for University-wide Research Facilities (CURF) at Jeonbuk National University (Jeonju, Republic of Korea). Histological damage to the colon tissue was evaluated by scores for epithelium loss (0–3), crypt damage (0–3), depletion of goblet cells (0–3), and infiltration of inflammatory cells (0–3) [22].

## 2.4. Measurement of Inflammatory Cytokine Levels

Colon tissue was homogenized with lysis buffer, and the supernatant was separated. ELISA kits were used to quantify the inflammatory cytokines (TNF-α, IL-1β, and IL-6) in the separated supernatant and serum, according to the manufacturer's procedure.

## 2.5. Quantitative Real-Time PCR (qRT-PCR) Analysis

qRT-PCR analysis was performed with reference to Song et al. [2021] and the reagent manufacturer's instructions [15]. Following the manufacturer's directions for the RNAiso Plus kit (Takara Bio, Inc.), total RNA was extracted from the colon tissue. cDNA was synthesized from total RNA using PrimeScript RT Master Mix (Takara Bio, Inc.). TOPreal SYBR green qPCR Premix (Enzynomics, Daejeon, Republic of Korea) and a 7500 real-time PCR system (Applied Biosystems, Foster City, CA, USA) were used to carry out the qRT-PCR. The relative expression of each target gene was determined using the 2^(−ΔΔCt) method and normalized to that of the internal reference GAPDH.

## 2.6. Western Blotting

Western blotting was carried out by referring to the experimental method of Jang et al. [2019] [22].
Total protein lysates were extracted by homogenizing the colon tissue in a radioimmunoprecipitation assay (RIPA) buffer containing protease and phosphatase inhibitors. The protein content of the supernatant obtained by centrifugation of the extract was quantified using a BCA assay kit. Loading buffer was added to the supernatant, which was then denatured at 95 °C for 10 min. Protein samples were electrophoresed on SDS–polyacrylamide gels and then transferred to polyvinylidene difluoride (PVDF) membranes. After blocking the membrane with $5\%$ skim milk, the primary antibody diluted to an appropriate concentration was applied for 24 h at 4 °C. After washing the membrane with Tris-buffered saline with $0.1\%$ Tween 20 (TBST), the secondary antibody was added, and the proteins were detected using enhanced chemiluminescence (ECL) solution and a ChemiDoc system (ATTO LuminoGraph II, ATTO, Tokyo, Japan). The bands of the target proteins were quantified using ImageJ software (US National Institutes of Health, Bethesda, MD, USA) and normalized to β-actin.

## 2.7. Gut Microbial Community Analysis

Song et al. [2021] and Jang et al. [2019] were referred to for fecal collection and gut microbiota analysis [15,21]. The day after the DSS drinking was completed, feces were collected and stored at −80 °C for analysis of the gut microbial community. The microbial community of the collected feces was analyzed by Macrogen Inc. (Seoul, Republic of Korea). In summary, a library for 16S metagenomic sequencing was prepared by amplifying the V3–V4 region of 16S rRNA using the Hercules kit on the Illumina platform to construct a library of DNA extracted from fecal samples. The sequencing results were analyzed using the QIIME2 program, and taxonomic classification was confirmed using the BLAST program against the NCBI 16S database.

## 2.8. Statistical Analysis

Data are shown as the mean ± standard deviation (SD).
Statistical analysis was performed using SPSS 18.0 software (SPSS Inc., Chicago, IL, USA). The significance of differences among groups was assessed using one-way analysis of variance (ANOVA) with Duncan's post hoc test at $p < 0.05$.

## 3.1. Effects of Blackcurrant on Clinical Symptoms and Colon Damage in DSS-Induced Colitis

Symptoms of colitis were assessed as changes in body weight, disease activity index (DAI), colon length, and weight per length of the colon (Figure 1A–C). There was no significant difference in body weight before DSS administration, but from the 6th day after DSS administration, body weight in both the DSS and DSS + BC groups was significantly reduced compared with the Vehicle control group (Figure 1A). Changes in DAI were checked daily during the DSS drinking period (Figure 1B). The DSS group showed a significantly higher DAI than the Vehicle group from the 22nd day. In contrast, the DSS + BC group showed significantly lower values than the DSS group until the 25th day. The DSS + BC group also showed an improved DAI on the final day of the experiment. The colon length was 4.73 ± 0.66 cm in the UC-induced DSS group, significantly shorter (by about $29.9\%$) than the 6.75 ± 0.33 cm in the Vehicle group (Figure 1C). In the DSS + BC group, colon length was 5.70 ± 0.42 cm, meaning the DSS-related decrease in colon length was significantly restored. In addition, the DSS + BC group showed a significantly reduced colon weight-to-length ratio.

## 3.2. Effects of Blackcurrant on Histological Changes in the Colon Tissue in DSS-Induced Colitis

Sections of the colonic tissue were stained with H&E, and histopathological scores were assigned to assess the extent of damage (Figure 2A,B). The Vehicle group had no damage or inflammatory response in the mucosa, submucosa, crypt structure, or goblet cells of the colon.
However, severe epithelial erosion, deficiency of goblet cells, destruction of the crypt structure, and infiltration of many inflammatory cells into the mucosa and submucosa were observed in DSS-treated mice. Supplementation with blackcurrant alleviated the damage to the mucosal layer of colonic tissue and the infiltration of inflammatory cells caused by DSS, and significantly reduced the histological damage score.

## 3.3. Effects of Blackcurrant on the Levels of Pro-inflammatory Cytokines in the Serum and Colon Tissue in DSS-Induced Colitis

The levels of pro-inflammatory cytokines in the serum and colon are shown in Table 1. The DSS group showed significantly higher levels of serum TNF-α and interleukin (IL)-6 than the Vehicle group. The DSS + BC group showed significantly attenuated levels of serum TNF-α, which had been elevated by DSS. In colon tissue, the levels of TNF-α and IL-1β in the DSS group were increased significantly compared with the Vehicle group. However, the levels of TNF-α and IL-1β increased by DSS treatment were significantly reduced in the DSS + BC group.

## 3.4. Effects of Blackcurrant on the Nuclear Factor-Kappa-Light-Chain-Enhancer of Activated B Cells (NF-κB) Signaling Pathway, Tight Junction (TJ) Proteins, and Mucin in DSS-Induced Colitis

We investigated whether BC affects the expression of genes and proteins related to the NF-κB signaling pathway, mucin, and TJ proteins (Figure 3A–D). In the DSS group, the genes for toll-like receptor-4 (TLR-4) and nuclear factor-kappa-light-chain-enhancer of activated B cells (NF-κB), both related to the NF-κB signaling pathway, were upregulated compared with the Vehicle group (Figure 3A). Furthermore, an increase in the expression of iNOS, COX-2, pro-inflammatory cytokines (TNF-α, IL-1β, IL-6), and monocyte chemoattractant protein-1 (MCP-1), which are downstream genes of NF-κB, was observed in the DSS group.
However, these elevated mRNA expression levels were suppressed in the DSS + BC group, reaching values similar to those of the Vehicle group. Next, the effects of BC on the expression of genes encoding TJ proteins and mucin involved in barrier function were evaluated (Figure 3B). Expression of all genes associated with TJ proteins and mucin was significantly downregulated in the DSS group compared with the Vehicle group. In contrast, the DSS + BC group showed higher expression of all such genes than the DSS group. The expression of proteins related to the NF-κB signaling pathway, an inflammatory response pathway, was also examined (Figure 3C,D). The phosphorylation of NF-κB p65 (p-p65) and the protein expression of its downstream enzymes, iNOS and COX-2, were significantly increased in the DSS group compared with the Vehicle group. However, BC supplementation inhibited the DSS-induced overexpression of p-p65, iNOS, and COX-2. That is, the administration of BC decreased the inflammatory response by inhibiting the NF-κB signaling pathway activated by DSS in the colon. ## 3.5. Effects of Blackcurrant on Modulation of the Gut Microbiome in DSS-Induced Colitis The influence of BC on the diversity and relative abundance of the gut microbiome was analyzed (Figure 4). To assess the α-diversity of the gut microbiota, the observed amplicon sequence variants (ASVs) and the Chao1 index, measures of richness, were evaluated. There was no significant difference among the groups, but the α-diversity of the DSS + BC group tended to increase slightly compared with the DSS group (ASV: Vehicle, 116.00 ± 32.33; DSS, 106.80 ± 9.36; DSS + BC, 125.20 ± 36.53; Chao1: Vehicle, 117.61 ± 32.30; DSS, 108.66 ± 11.33; DSS + BC, 127.79 ± 37.95). Regarding the composition of the gut microbiota, the DSS group showed a distinct alteration from the Vehicle group (Figure 4A–D).
In taxonomic community analysis at the phylum level, Firmicutes and Actinobacteria were reduced in the DSS group compared with the Vehicle group, whereas Bacteroidetes and Verrucomicrobia were increased (Figure 4A). Meanwhile, BC supplementation was found to modulate these phylum-level changes caused by DSS. The abundance of Ligilactobacillus, Enterococcus, and Bifidobacterium at the genus level was high in the Vehicle group (Figure 4B). However, DSS treatment diminished these genera and elevated the levels of Bacteroides, Escherichia, and Akkermansia. BC decreased Bacteroides levels and increased Ligilactobacillus compared with the DSS group. Moreover, at the species level, the administration of BC was shown to regulate the changes in microbial composition due to DSS (Figure 4C). When β-diversity was analyzed with a principal coordinate analysis (PCoA) plot to assess the relative similarity of the gut microflora between groups, the Vehicle and DSS-treated groups were separated along the first principal component (PC1) (Figure 4D). Moreover, the DSS and DSS + BC groups were separated along the second principal component (PC2), and supplementation with BC tended to modulate the gut microbial community. ## 4. Discussion The cause of colitis is considered to be an imbalance in intestinal homeostasis due to the influence of genetic, microbiological, immunological, and environmental factors [5,6]. Natural products are being developed to treat UC, and ACNs are known to have positive effects on gut health [9,10,12,13,18,19]. Thus, the current study aimed to analyze the immunological and microbiological changes in the colon underlying the beneficial effects of ACN-rich BC in mice with DSS-induced colitis. Indeed, a previous study reported that nonalcoholic steatohepatitis was prevented in mice fed a high-fat/high-sucrose diet containing $6\%$ whole BC powder, equivalent to consuming two cups of fresh BC per day in humans, for 24 weeks [23].
Based on that previous study, we explored the effect of oral administration of 150 mg/day of whole BC powder to mice (7.5 g/kg body weight (BW); total ACN content, 165 mg/kg BW), which was less than the dose administered in the previous study. The anti-inflammatory effects of BC at this dose in mice with DSS-induced colitis were confirmed in the present study. Chemical induction of colitis using DSS in mice is the most widely used method because it reflects the clinical symptoms and histological changes observed in humans [6,24,25]. DSS, which has a highly negative charge, acts directly on colonic epithelial cells as a chemical toxin and damages them, resulting in the depletion of mucin and goblet cells, epithelial erosion, and ulcers [24,25]. Destruction of the intestinal epithelial layer also increases colonic epithelial permeability, allowing commensal bacteria and related antigens to infiltrate the mucosa, followed by infiltration of immune cells such as neutrophils [22,24,25]. Immune cells infiltrating the lamina propria and submucosa reportedly secrete pro-inflammatory cytokines and disseminate inflammatory responses to underlying tissues [24,25]. In this work, when colitis was induced with DSS, clinical symptoms such as a decrease in body weight and colon length, as well as an increase in DAI and colon weight, were observed. Furthermore, histological changes were observed after inducing colitis with DSS, including epithelial loss, crypt damage, depletion of goblet cells, and infiltration of inflammatory cells. In contrast, the administration of BC had no effect on weight loss but showed beneficial effects on the other clinical symptoms and histological changes following colitis induction. In another study, the intake of 200 mg/kg BW of crude ACN isolated from the fruits of *Lycium ruthenicum* Murray likewise had no effect on weight loss induced by DSS, similar to our results [21].
Previous studies also demonstrated that giving mice ACN-containing materials, such as the water extract of maqui berry and ACNs extracted from mulberry fruit and black rice, relieved the pathological changes in the colon caused by DSS, such as inflammatory cell infiltration and mucosal damage [26,27,28]. Additionally, when silver nanoparticles with a diameter of 213 nm based on blackcurrant extract were supplied to the DSS colitis mouse model at a concentration of 2 mg/kg, only the macroscopic score and colon shortening were significantly improved [19]. Similar to that study, our study, in which whole BC powder was administered, showed improvement in these indicators, and additionally relieved the colonic weight-to-length ratio. This difference is likely due to the difference in dose. Damage to intestinal epithelial cells caused by DSS was reported to worsen the inflammatory response by increasing the generation of pro-inflammatory cytokines [9,25,29]. It was also reported that the levels of TNF-α and IL-6 were altered in the serum of mice with early-stage colitis induced by one week of DSS administration [29]. Elevated levels of pro-inflammatory cytokines due to colitis can be reduced by various polyphenols, including ACNs [9,13,22]. In this study, except for IL-1β in the serum and IL-6 in the colon tissue, DSS treatment increased the levels of the other pro-inflammatory cytokines, whereas BC administration decreased these levels. It was reported that treatment with petunidin 3-O-[rhamnopyranosyl]-(trans-p-coumaroyl)-5-O-[β-D-glucopyranoside] (P3G), isolated from the fruits of *Lycium ruthenicum* Murray, reduced all pro-inflammatory cytokines in the serum, but there was no difference in IL-1β levels in the crude ACN-administered group compared with the DSS-treated group, as in our study [21].
When mulberry ACN was administered, all pro-inflammatory cytokine indicators in the colon were decreased at a high dose (200 mg/kg BW), but there was no change, except for IL-1β, at a low dose (100 mg/kg BW) [26]. The major ACNs in BC are delphinidin-3-rutinoside, cyanidin-3-rutinoside, delphinidin-3-glucoside, and cyanidin-3-glucoside, and each food item contains different types of ACNs [11]. Therefore, the differences in effects on weight loss and pro-inflammatory cytokines were presumed to be due to differences in the types and intakes of ACNs in different foods, as well as differences in UC mouse models and disease stages. Moreover, previous studies have shown that BC extract decreases inflammation-related cytokines in bone-marrow-derived macrophages and in vascular tissue of mice with type 2 diabetes mellitus [17,30]. Similarly, in the present study, BC was observed to reduce the production of pro-inflammatory cytokines, even when consumed in the form of whole BC powder. Intestinal homeostasis is maintained by a barrier consisting of mucus, epithelial cells, and immune cells that prevents the penetration of bacteria and other antigens into the colon tissue [2,31]. DSS induces the loss of TJ proteins (ZO-1 and occludin) and of mucin in the intestinal epithelial layer [21,26,31,32]. NF-κB is an inducible transcription factor that regulates the expression of genes encoding cytokines associated with immune and inflammatory responses and is involved in maintaining intestinal homeostasis [33]. When cells are stimulated externally through gut microbes, pro-inflammatory cytokines and toll-like receptors activate NF-κB (p-p65), which is known to be involved in the onset of inflammatory diseases by upregulating the expression of inflammation-related cytokines (TNF-α, IL-1β, and IL-6), chemokines (MCP-1), and inducible enzymes (COX-2, iNOS) [21,28,32,33].
In previous studies, the administration of ACN to mice with DSS-induced colitis and to mice fed a high-fat diet increased the expression of factors related to mucin and TJ proteins in the colon, while downregulating the expression of target genes in the NF-κB signaling pathway [21,34]. In vitro, ACN-rich bilberry and BC extracts, as well as the 3-O-glucosides of cyanidin and delphinidin, have been shown to inhibit the activity of TNF-α-induced NF-κB in intestinal epithelial cells [18,35]. The results of this study demonstrated that BC intake enhanced the expression of genes related to mucin and TJ proteins in colitis-induced mice. Additionally, BC decreased the phosphorylation of the NF-κB subunit and downregulated the expression of NF-κB target genes and proteins, such as COX-2 and iNOS, which was shown to improve DSS-induced colitis. Many studies have reported that changes in the community structure of the gut microflora are associated with the development of colitis [10,21,22,24,25,26,28]. In the DSS-induced colitis model, maqui berry extract and the ACNs of mulberry and *Lycium ruthenicum* Murray changed the α-diversity of the gut microflora [21,28], but BC did not change it significantly. However, it was confirmed that treatment with BC had an effect on the β-diversity and gut microbial composition, which was distinct from that of the DSS group. Several studies using DSS-induced colitis mouse models revealed a reduction in Firmicutes and an increase in Bacteroidetes at the phylum level, and the intake of ACNs and flavonoids modulated their composition [36,37,38]. The genera Lactobacillus (parts of which have been reclassified, including into Ligilactobacillus [39]) and Bifidobacterium in the colon, known from several studies to have beneficial effects on health, are reduced by DSS [22,28,40], and our results were similar.
Similar to another chronic DSS animal study, this study observed that treatment with DSS increased the genus Akkermansia, and this increase was positively correlated with the pro-inflammatory cytokine IL-1β [40]. Although the genus *Akkermansia* is known to have anti-inflammatory effects, its exact role in IBD is not known, so this remains controversial and more studies are required [41]. Regarding changes in relative abundance at the species level, BC decreased *Bacteroides acidifaciens*, a known colitis-associated bacterium, after DSS treatment, and increased *Bacteroides caecimuris*, which rises in the recovery phase after stopping DSS treatment [42]. In addition, BC administration tended to increase the abundance of *Mucispirillum schaedleri*, which has been reported to have a preventive effect against colitis caused by *Salmonella*, and of *Alistipes putredinis*, which decreases in IBD [43,44]. As such, BC modulated the composition of the gut microbiota that was altered by DSS. However, further studies are required to investigate the precise mechanisms by which the gut microbiota contributes to the alleviation of colitis by BC. ## 5. Conclusions The intake of whole BC powder was shown to prevent the clinical symptoms and histological destruction caused by colitis. BC was observed to attenuate the levels of pro-inflammatory cytokines in serum and colon tissues and to enhance the gene expression of mucin and tight junction proteins. Additionally, it downregulated the expression of target proteins and genes involved in the NF-κB signaling pathway. Furthermore, BC showed the potential to alleviate the intestinal inflammatory response by modulating the composition of the gut microbiota altered by DSS. Therefore, in this study, whole BC powder showed a protective effect against DSS-induced colitis by regulating the inflammation-related NF-κB signaling pathway and the gut microflora, confirming its potential as a natural dietary material to improve UC.
# Prevalence of Malnutrition in Hospitalized Patients in Lebanon Using Nutrition Risk Screening (NRS-2002) and Global Leadership Initiative on Malnutrition (GLIM) Criteria and Its Association with Length of Stay ## Abstract Background: Prevalence studies on hospital malnutrition are still scarce in the Middle East region despite recent global recognition of clinical malnutrition as a healthcare priority. The aim of this study is to measure the prevalence of malnutrition in adult hospitalized patients in Lebanon using the newly developed Global Leadership Initiative on Malnutrition (GLIM) tool, and to explore the association between malnutrition and the length of hospital stay (LOS) as a clinical outcome. Methods: A representative cross-sectional sample of hospitalized patients was selected from a random sample of hospitals in the five districts of Lebanon. Malnutrition was screened and assessed using the Nutrition Risk Screening tool (NRS-2002) and the GLIM criteria. Mid-upper arm muscle circumference (MUAC) and handgrip strength were used to measure and assess muscle mass. Length of stay was recorded upon discharge. Results: A total of 343 adult patients were enrolled in this study. The prevalence of malnutrition risk according to NRS-2002 was $31.2\%$, and the prevalence of malnutrition according to the GLIM criteria was $35.6\%$. The most frequent malnutrition-associated criteria were weight loss and low food intake. Malnourished patients had a significantly longer LOS compared to patients with adequate nutritional status (11 days versus 4 days). Handgrip strength and MUAC measurements were negatively correlated with the length of hospital stay.
Conclusion and recommendations: the study documented the valid and practical use of GLIM for assessing the prevalence and magnitude of malnutrition in hospitalized patients in Lebanon, and highlighted the need for evidence-based interventions to address the underlying causes of malnutrition in Lebanese hospitals. ## 1. Introduction Nutritional risk and malnutrition are highly prevalent in hospitalized patients [1], and have been reported to range from 20 to $50\%$ in different European and South American countries, with an average of $41.7\%$ worldwide [2]. There is abundant evidence that malnutrition is associated with increased morbidity, nosocomial infections and hospital readmission [3]. Recent studies have also demonstrated that malnutrition is associated with prolonged length of stay (LOS) in patients with acute illness or even chronic non-communicable diseases [4,5]. Consequently, malnutrition is identified as a major encumbrance for hospitalized patients and a driver of increased healthcare cost, incurring a considerable economic burden and accounting for between 2.1 and $10\%$ of national health expenditures in Europe [6,7]. Nevertheless, malnutrition is still not addressed as a serious clinical problem due to the lack of clearly defined responsibilities and the lack of universally accepted diagnostic criteria [8,9]. Global efforts are being launched, as well as a call to action to implement mandatory screening, establish a diagnostic code and develop national protocols to position nutrition as a healthcare priority [9,10]. Recently, the Global Leadership Initiative on Malnutrition (GLIM) has established a consensus for the diagnosis of malnutrition based on a combination of phenotypic and etiologic criteria and proposed it as a new tool to be validated in the disease-afflicted hospitalized population [11].
In the Middle East region, initiatives to study the prevalence of malnutrition in hospitals have been modest, with Turkey recently publishing a rate of $39\%$ [12]. An international multicenter study published in 2008 reported a lower rate of $22\%$ for risk of malnutrition in two Lebanese hospitals [13]. Other prevalence studies in Lebanon have focused only on the rate of malnutrition in community settings, with reported rates of $61.3\%$ for malnutrition and risk of malnutrition in older adults living in long-term care centers, and lower rates of $48.3\%$ in older adults living in their homes [14,15]. ## Context of the Study Lebanon is a small country of the Middle East region covering an area of 10,452 km2 and bordering both Syria and Israel, and is considered to be a conflict area. The country is divided into five main districts: the north, Mount Lebanon, the south, the Bekaa Valley and the capital Beirut and its suburbs. In 2015, the population was estimated to be 6,847,712, including Lebanese people, foreign workers and refugees [16,17]. The highest population density is seen in Beirut and its suburbs. The south, north and Bekaa have the highest number of small rural villages. Lebanon has one hundred and forty-four hospitals comprising 11,742 beds, of which $78.3\%$ are private and $21.7\%$ are public. The beds are distributed as follows: 3,806 ($32.4\%$) in Mount Lebanon, 2,452 ($20.9\%$) in Beirut, 1,931 ($16.4\%$) in the south, 1,852 ($15.8\%$) in the north and 1,701 ($14.5\%$) in Bekaa. Annual hospital admissions are reported to be 698,210 cases per year, with the highest percentages in Beirut and Mount Lebanon, $22.3\%$ and $29.6\%$, respectively [18]. According to the World Bank, the gross domestic product was estimated at USD 23.1 billion in 2021 compared to USD 52 billion in 2019. The drop in GDP per capita was a drastic $36.5\%$ in just two years, and Lebanon was reclassified as a lower-middle-income country instead of an upper-middle-income country.
These drastic changes have resulted in difficulties in covering the cost of medical treatments and health coverage, which relies on both National Social Security and private insurance [17]. The aim of this study was to determine the prevalence of malnutrition in Lebanese hospitals using the newly proposed GLIM tool, and to explore its different criteria and their relationship with length of stay, an easily measurable outcome parameter that is directly related to hospital costs [19]. The findings of this study will be the first milestone towards establishing a national policy mandating nutritional screening and assessment in all hospitalized patients. They can also guide the authorities in forming a surveillance system and evaluating strategies targeted at decreasing the rate of malnutrition in hospitals. ## 2.1. Design and Sample Size The study is a cross-sectional, observational, multicenter study. The sample size was estimated at 330 hospitalized patients to achieve a $95\%$ confidence interval with a margin of error of 0.05 and a $100\%$ expected response rate, using the WHO STEPS sample size calculator and the number of yearly hospital admissions [18]. It was calculated considering a significance level of $5\%$ with $80\%$ power. The number of patients in a random sample of hospitals in the five districts of Lebanon was weighted by the number of admissions per district from the National Health Survey [18]. The distribution of samples according to districts to obtain national representation is presented in Figure 1. Only private hospitals were included, owing to restricted access to public hospitals during the period of data collection. All adult patients, males and females aged 18 years and above, admitted to the different wards of the hospitals during the period of data collection were recruited within 48 h of admission.
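The study's sample size was produced with the WHO STEPS calculator and the yearly admission counts, so the figure of 330 reflects those specific inputs. As a rough illustration only, the generic Cochran formula for estimating a proportion, with an optional finite-population correction, can be sketched as follows (the function name and defaults are illustrative, not the study's actual calculation):

```python
import math
from typing import Optional

def cochran_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05,
                        population: Optional[int] = None) -> int:
    """Cochran's sample size for estimating a proportion; applies a
    finite-population correction when a population size is given."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

# Maximum-variance assumption (p = 0.5), 95% confidence, 5% margin of error:
print(cochran_sample_size())                    # 385
# With the reported 698,210 yearly admissions as the population:
print(cochran_sample_size(population=698_210))  # 384
```

With a population this large the correction barely changes the result; the gap between these textbook values and the reported 330 would come from the calculator's additional parameters (e.g., expected response rate and district weighting).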
Exclusion criteria were admission to the following wards: gynecology (including all pregnant and lactating women), intensive care unit and psychiatry, as well as short stays of less than 48 h. ## 2.2. Data Collection Patient characteristics, i.e., age, gender, admission diagnosis, history of previous admissions, underlying diseases and number of home medications, were recorded. Patients were interviewed for history of weight loss, appetite and a record of food intake. C-reactive protein (CRP) levels were retrieved from the available blood tests in patients’ records. The length of hospital stay was calculated from the date of admission to the date of discharge. Body weight and height were measured using the Detecto manual scale to the nearest 1 kg and 1 cm, respectively. BMI (weight in kg/height in m2) was calculated accordingly. Mid-upper arm muscle circumference (MUAC) was measured at the midpoint between the acromion and olecranon processes of the non-dominant arm using a non-stretchable tape measure to the nearest 0.1 cm. The MUAC was categorized into three groups: “normal”, “moderately depleted” for measurements <23 cm and “severely depleted” for those <20 cm [20]. Handgrip strength was measured with the non-dominant hand using the Saehan hydraulic hand dynamometer to the nearest 0.1 kg. The handgrip strength variable was categorized into two groups, “normal” and “low”, according to the gender cut-off points of <27 kg for males and <16 kg for females [20]. ## 2.3. Nutritional Status The Nutrition Risk Screening (NRS-2002) tool was used for nutritional screening, followed by an evaluation of malnutrition using the GLIM criteria. NRS-2002 is a two-step tool consisting of evaluating BMI, assessing recent weight loss and changes in food intake, and grading the severity of disease as a reflection of increased nutritional requirements. Patients with a total score of 3 or more in the final screening were considered nutritionally at risk [21].
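For reference, the published NRS-2002 final screening combines an impaired-nutritional-status score (0–3) and a disease-severity score (0–3), with a +1 age adjustment for patients aged 70 years or over, and flags risk at a total of 3 or more. A minimal sketch of that scoring rule (function names are illustrative, not from the study):

```python
def nrs2002_total(nutrition_score: int, severity_score: int, age: int) -> int:
    """NRS-2002 final score: impaired nutritional status (0-3) plus
    severity of disease (0-3), with a +1 adjustment for age >= 70."""
    if not (0 <= nutrition_score <= 3 and 0 <= severity_score <= 3):
        raise ValueError("component scores must be in the range 0-3")
    return nutrition_score + severity_score + (1 if age >= 70 else 0)

def at_nutritional_risk(total: int) -> bool:
    """A total score of 3 or more flags the patient as nutritionally at risk."""
    return total >= 3

# A 72-year-old with moderate weight loss (2) and mild disease severity (1):
score = nrs2002_total(2, 1, 72)
print(score, at_nutritional_risk(score))  # 4 True
```

The age adjustment means an older patient can cross the risk threshold with only moderate component scores, which is consistent with the tool's intent of capturing increased requirements in older inpatients.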
GLIM diagnosis was performed as a two-step process: firstly, identifying at least one phenotypic criterion and one etiologic criterion, and secondly, assessing the severity of malnutrition as either “moderate” or “severe” based on the phenotypic criterion [22]. Weight loss and BMI were used to evaluate the phenotypic criteria. The third phenotypic criterion evaluated was muscle mass, using MUAC as the measurement and handgrip strength as the supportive measure. MUAC was used as a surrogate technique, as endorsed in recent recommendations, in the usual situation where body composition techniques such as bioelectrical impedance analysis and dual-energy X-ray absorptiometry are not available in hospitals [23]. The GLIM criteria emphasize that handgrip strength should be used as an additional supportive measure when only anthropometric measurements are available [22]. Handgrip strength is commonly employed in practice to assess muscle function qualitatively [23]. Reduced food intake, chronic gastrointestinal conditions affecting absorption and inflammatory conditions assessed via CRP levels were the etiologic criteria. Cut-off points of the different etiologic and phenotypic criteria are described in Table 1. ## 2.4. Statistical Analysis Statistical analysis was performed using STATA V17.1. Descriptive variables were reported as n (%), mean ± standard deviation (SD) or median and interquartile range (IQR). Cohen’s kappa (κ) was calculated to assess the agreement between NRS-2002 and GLIM. The length of hospital stay variable was then dichotomized into two groups, with the median of 5 days used as the cut-off point: group one, ≤5 days, and group two, >5 days. Mann–Whitney U and χ2 tests were performed to assess the differences in the length of hospital stay and history of hospital readmissions between the malnourished patients and those of normal nutritional status.
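The Cohen's kappa used to quantify NRS-2002/GLIM agreement can be computed directly from a 2 × 2 cross-tabulation of the two classifications. A self-contained sketch follows; note that the counts below are hypothetical, chosen only to be consistent with the reported marginal prevalences (31.2% and 35.6% of 343 patients), since the study's raw cross-tabulation is not given here:

```python
def cohens_kappa(table):
    """Cohen's kappa for two raters from a square contingency table:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    observed = sum(table[i][i] for i in range(k)) / n
    row_totals = [sum(row) for row in table]
    col_totals = [sum(table[i][j] for i in range(k)) for j in range(k)]
    chance = sum(row_totals[i] * col_totals[i] for i in range(k)) / (n * n)
    return (observed - chance) / (1 - chance)

# Hypothetical counts for 343 patients: rows = NRS-2002 (at risk / not),
# columns = GLIM (malnourished / not); marginals match 107 and 122 patients.
table = [[96, 11],
         [26, 210]]
print(round(cohens_kappa(table), 3))  # 0.758
```

Kappa discounts the agreement expected by chance from the raw percent agreement, which is why it is preferred over simple concordance when comparing two screening or diagnostic tools.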
Spearman’s rank correlation coefficient (rho) was used to measure the association between the non-parametric variables of length of hospital stay, handgrip strength and MUAC. Multiple logistic regression analysis was used to determine whether malnutrition according to the GLIM criteria was independently associated with length of stay, with adjustments for gender and admission diagnosis. All reported p-values were assessed at a significance level of $5\%$. ## 2.5. Ethics The study was completed in compliance with the guidelines of the Helsinki Declaration. The study protocol was reviewed and approved by the Institutional Review Board of the American University of Beirut (SBS-2020-0079). All participants reviewed and signed an informed consent form before participation. ## 3.1. Basic Characteristics A total of 343 participants were enrolled in this study from May to October 2021. Baseline characteristics and the distribution among districts are presented in Table 2. The mean age was 60 years (SD: 17 years) and the majority of the participants were less than 70 years old ($65.89\%$). Surgical procedures ($32.94\%$) and infectious diseases ($27.7\%$) were the main reasons for hospital admission. ## 3.2. Prevalence of Malnutrition According to the NRS-2002 screening tool (Table 3), $31.20\%$ of the participants had scores greater than or equal to 3 and were thus identified as being “at risk of malnutrition”; of these, $51\%$ were males and $49\%$ were females. Beirut ($38.27\%$), followed by the north ($38.00\%$) and Mount Lebanon ($33.00\%$), were the main districts identified by NRS-2002 as having participants at risk. The south had the lowest proportion ($18.97\%$) compared to Beirut, and the result was statistically significant ($p = 0.016$). As for GLIM, $21.28\%$ and $14.29\%$ were identified as being “moderately” and “severely” malnourished, respectively, accounting for a total of $35.57\%$ malnourished participants (Table 3).
Half of the malnourished patients were male and half were female. Similar to the NRS-2002 results identifying patients at risk of malnutrition, Beirut ($43.21\%$), the north ($42.00\%$) and Mount Lebanon ($34.00\%$) were the main districts with malnourished participants (Figure 2). Bekaa had the lowest proportion ($25.93\%$) compared to Beirut, and the result was statistically significant ($p = 0.043$). The strength of the agreement between NRS-2002 and GLIM in identifying at-risk-of-malnutrition and malnourished patients, as per Cohen’s kappa, was κ = 0.7580 ($p < 0.001$), indicative of good agreement. ## 3.3. Frequency of the Different GLIM Criteria The frequencies of the different GLIM criteria among malnourished patients are described in Figure 3. Among the 122 patients who were identified as “moderately” and “severely” malnourished according to GLIM, the most dominant phenotypic criterion was “weight loss”, accounting for $82\%$. The median weight loss was 8.5 kg (IQR 6.25–10). As for the etiologic criteria, the most prominent was “reduced food intake”, accounting for $88\%$ of patients, among whom a reduction in food intake for a period exceeding 2 weeks was the main measure ($41.8\%$). The number of patients with low handgrip strength was 92 ($75.4\%$). The mean handgrip strength of the males was 19.59 kg (SD = 4.28), whereas that of the females was 12.61 kg (SD = 2.44). As for the MUAC, 32 patients were identified as moderately depleted ($26.2\%$) and 10 patients as severely depleted ($8.2\%$), a total of 42 patients ($34.4\%$). The mean MUAC was 21.56 cm (SD = 0.7) for males and 20.2 cm (SD = 2.8) for females. More than half of the moderately malnourished patients had normal BMIs ($54.9\%$). ## 3.4. Association of Malnutrition, Muscle Mass and Length of Hospital Stay The patients’ median length of hospital stay was 5 days (IQR 3–10).
There was a significant difference in the length of hospital stay between patients identified as malnourished according to the GLIM criteria and those of normal nutritional status (11 days, IQR 9–15, versus 4 days, IQR 3–5, respectively; $p < 0.001$). When the median of 5 days was considered as the cut-off point, $90.9\%$ of malnourished patients had a length of hospital stay greater than 5 days compared to $9.1\%$ of patients of normal nutritional status, as shown in Table 4 ($p < 0.001$). Handgrip strength and MUAC measurements were negatively correlated with the length of hospital stay (ρ = −0.40, $p < 0.001$ and ρ = −0.25, $p < 0.001$, respectively), regardless of the patient’s nutritional status. Patients with low handgrip strength measurements had a length of hospital stay greater than the median of 5 days ($74.4\%$ versus $25.6\%$, $p < 0.001$). As for patients with moderate and severe depletion in MUAC measurements, $84.4\%$ had a length of hospital stay greater than the median ($84.4\%$ versus $15.6\%$, $p < 0.001$) (Table 4). ## 3.5. Multiple Logistic Regression of Length of Hospital Stay A malnutrition diagnosis was found to be an independent predictor of length of hospital stay, as shown in Table 5. Specifically, patients who were identified as malnourished according to the GLIM criteria ($p < 0.001$) had higher odds of a length of hospital stay exceeding 5 days compared to those who were well-nourished. Age was excluded from the model because it was part of the malnutrition diagnosis. The Hosmer–Lemeshow goodness-of-fit test indicated that our model fit the data well, with a p-value of 0.2364. ## 3.6.
Association of Malnutrition with Hospital Readmission Patients who were identified as malnourished according to the GLIM criteria ($33.61\%$) were more likely to have been previously admitted to the hospital in the past 3 months compared to those identified as having a normal nutritional status ($3.17\%$) (χ2 = 60.51, $p < 0.001$). ## 4. Discussion The prevalence rate of malnutrition risk among hospitalized patients was $31.2\%$ according to NRS-2002, and the prevalence of malnutrition according to the GLIM criteria was $35.6\%$. These figures are different from previous data collected in 2008 in the two large Lebanese hospitals of the international multicenter study, where only malnutrition risk was screened and the rate was $22\%$ using the NRS-2002 tool [13]. In addition to the fact that our sample was larger and more hospitals were included, this difference in rate reflects the increase in the risk of malnutrition in hospitalized patients in a country whose economic situation has drastically deteriorated. This crisis is affecting the access to and availability of nutrition care in hospitals [17]. The highest percentage of malnutrition according to GLIM was detected in the capital Beirut ($43.2\%$), where hospitals are larger and more complicated cases are admitted. A lower prevalence of $26\%$ was observed, on the other hand, in Bekaa, where the population density is much lower [18]. The prevalence in the five districts is very similar to the rates reported in other countries, varying from $20\%$ to $50\%$ with higher ranges in developing countries [2,12]. One other recent study, restricted to one hospital in Lebanon with a smaller sample size, reported that $34.7\%$ of its sample population was at risk of malnutrition and $9.3\%$ were malnourished [24]. Although the percentage of at-risk patients is high, the lower rate of malnutrition is probably due to the use of a different tool, the Mini Nutritional Assessment (MNA), which is specific to older adults [24].
The prevalence of malnutrition risk using NRS-2002 ($31.2\%$) was slightly lower than the prevalence of the malnutrition diagnosis using GLIM criteria ($35.6\%$). However, there was good statistical agreement between the two tools. This concordance was also recently reported in a study on hospitalized patients in Turkey, where GLIM was correlated with NRS-2002 but not with other nutrition assessment tools [25]. Other studies have found a stronger correlation between GLIM and other screening tools such as the Malnutrition Universal Screening Tool (MUST), but their sample populations were older adults and patients with cancer specifically [26,27,28]. Therefore, NRS-2002 is still considered a valid and more specific tool for hospitalized patients during the screening process, as recommended by clinical practice guidelines [29]. GLIM is considered a diagnostic tool to be used after screening to confirm the nutritional assessment. It differs from other assessment tools in that it comprises several distinct criteria and severity levels. In our study, we examined the frequency of each phenotypic and etiologic criterion in patients diagnosed with moderate and severe malnutrition. The most frequent criteria were weight loss and low food intake, which are quick and easy to collect. This same combination of weight loss and low food intake was observed in a study on the validation of GLIM and was considered the most predictive with regard to worse clinical outcomes [30]. On the other hand, low BMI was the least recorded criterion in our sample population, at $16\%$ compared to $88\%$ for weight loss and $57\%$ for low muscle mass. More than half of malnourished patients had a normal BMI, reemphasizing the importance of not relying solely on BMI in nutrition assessment, an issue clinicians are repeatedly confronted with [31].
Patients identified as malnourished by GLIM had a significantly longer length of stay (LOS), by 7 days, and significantly higher rates of previous hospital readmissions. Both LOS and the incidence of hospital readmissions are surrogate markers of a patient’s clinical outcomes and economic costs [32,33]. This strong correlation links malnutrition with unexpected complications and a worsening clinical status, highlighting the importance of identifying malnutrition early during hospitalization. The prediction model identifying the malnutrition diagnosis as a predictor of length of stay independent of underlying diseases reinforced the association of malnutrition with worse clinical outcomes. It demonstrates the validity of GLIM criteria for predicting prolonged hospitalization as a health outcome [34]. Interestingly, our study also found a correlation of LOS with low MUAC and handgrip strength, independently of nutritional status. Handgrip strength has previously been linked to longer hospitalization, but MUAC has never been studied from this perspective, since it is more commonly used in the pediatric population [35,36]. Our findings may support adding simple anthropometric measurements that do not require expensive tools, such as MUAC, to assess muscle mass as part of GLIM criteria when bioelectrical impedance analysis (BIA) or dual-energy X-ray absorptiometry (DEXA) are not available [37]. Our findings of high prevalence rates support the need to increase awareness of malnutrition, which many global efforts are now targeting. Consequently, the newly developed European Nutrition for Health Alliance started the Optimal Nutritional Care for All (ONCA) campaign, which launched a global call for action in 2013 to all countries to raise public awareness, establish a nutrition assessment pathway, and develop national protocols to include effective nutrition care as a fundamental right to health [16].
Other similar associations from different countries followed this path and launched an international call to action in the forum “Linking Nutrition Around the World” [9]. In addition, the United Nations Decade of Action on Nutrition emphasized that national policies should prioritize aligned health systems providing universal coverage of all essential nutrition actions [38]. Lebanon and other countries in the Middle East have not yet joined these global efforts. However, a national policy, supported by international instruments, is becoming a necessity to identify and target malnutrition, especially during the economic crisis the country is going through. It is important to mention that initiatives and policies targeting malnutrition should recognize the crucial role of dietitians in the nutrition care of the patient [39]. Clinical dietitians are integral members of the multidisciplinary team in hospitals, and they are uniquely qualified in the assessment and management of malnutrition in the care pathway of patients [40,41]. They are specialized in interpreting anthropometric measurements, recommending nutrition support plans, and providing counseling to patients [39,42]. Their nutrition interventions aim to improve the continuum of care of hospitalized patients and enhance clinical outcomes.

## Strengths and Limitations

To our knowledge, this is the first study to report the prevalence of malnutrition in hospitalized patients in a nationally representative sample of hospitals in Lebanon, and it is one of very few studies in the Middle East. Nutrition screening and assessment were conducted upon admission in a heterogeneous population with different medical and surgical diagnoses, making our study different from other prevalence studies conducted retrospectively and on specific patient populations.
The newly developed GLIM tool was also used with simple anthropometric measurements that can be easily obtained in settings with minimal resources. Our study nevertheless has limitations. Data were collected from private hospitals only, and public hospitals were excluded for security reasons, meaning that patients admitted to these hospitals, who are usually of lower socioeconomic status, were not represented. The cut-off values we used for MUAC and handgrip strength to assess muscle mass were taken from consensus recommendations and have not been validated in different patient populations. We therefore recommend that future studies validate these cut-off values.

## 5. Conclusions

Our study reports a considerably high prevalence of malnutrition in hospitalized patients upon admission that was directly associated with a longer length of stay, implying worse clinical outcomes. Since the identification of malnutrition remains an important first step toward its recognition and management in daily clinical practice, the use of GLIM criteria with simple, affordable anthropometric measurements is both a valid and a practical diagnostic step.
# PGRMC1 Ablation Protects from Energy-Starved Heart Failure by Promoting Fatty Acid/Pyruvate Oxidation

## Abstract

Heart failure (HF) is an emerging epidemic with a high mortality rate. Apart from conventional treatment methods, such as surgery or the use of vasodilator drugs, metabolic therapy has been suggested as a new therapeutic strategy. The heart relies on fatty acid oxidation and glucose (pyruvate) oxidation for ATP-mediated contractility; the former meets most of the energy requirement, but the latter is more efficient. Inhibition of fatty acid oxidation leads to the induction of pyruvate oxidation and provides cardioprotection to failing energy-starved hearts. One of the non-canonical types of sex hormone receptors, progesterone receptor membrane component 1 (Pgrmc1), is a non-genomic progesterone receptor associated with reproduction and fertility. Recent studies revealed that Pgrmc1 regulates glucose and fatty acid synthesis. Notably, Pgrmc1 has also been associated with diabetic cardiomyopathy, as it reduces lipid-mediated toxicity and delays cardiac injury. However, the mechanism by which Pgrmc1 influences the energy-starved failing heart remains unknown. In this study, we found that loss of Pgrmc1 inhibited glycolysis and increased fatty acid/pyruvate oxidation, which is directly associated with ATP production, in starved hearts. Loss of Pgrmc1 during starvation activated the phosphorylation of AMP-activated protein kinase, which induced cardiac ATP production. Pgrmc1 loss increased the cellular respiration of cardiomyocytes under low-glucose conditions. In isoproterenol-induced cardiac injury, Pgrmc1 knockout resulted in less fibrosis and low heart failure marker expression. In summary, our results revealed that Pgrmc1 ablation under energy-deficit conditions increases fatty acid/pyruvate oxidation to protect the heart against energy starvation-induced damage.
Moreover, Pgrmc1 may be a regulator of cardiac metabolism that switches the dominance of glucose-fatty acid usage according to nutritional status and nutrient availability in the heart.

## 1. Introduction

Heart failure is an emerging epidemic, and patients with reduced ejection fraction have a mortality rate of >$70\%$ [1]. Despite extensive studies on the epidemiology and risk factors, the mortality rate of heart failure remains high [2]. Malnutrition is a known risk factor for myocardial damage [3]. Clinically, individuals are exposed to malnutrition-mediated cardiac risks during surgery, sepsis, and some serious diseases [4]. Currently used drugs for cardiomyopathy, such as angiotensin-converting enzyme inhibitors or beta blockers, reduce vasoconstriction and decrease the risk of death [5]. However, improving the function of the heart itself would provide a more fundamental breakthrough in the treatment of energy-starved heart failure. ATP production is mainly derived from fatty acid oxidation in the heart [6]. Heart failure with hypertension or ischemia is accompanied by decreased cardiac fatty acid oxidation [7]. Similarly, glucose oxidation, another pathway for ATP production, is also suppressed in heart failure [8]. As a failing heart lacks energy due to decreased glucose and fatty acid oxidation, targeting cardiac energy metabolism is the main focus of many studies [9]. Although subtypes differ between sexes, the overall heart failure risk is comparable between men and women [10]. Some beneficial effects of androgen and estrogen on heart failure have been previously reported [11,12]. While synthetic progestins are considered to have deleterious effects, the influence of progesterone or canonical progesterone receptors in heart failure is neither beneficial nor deleterious [13]. One of the progesterone receptors, progesterone receptor membrane component 1 (Pgrmc1), has been reported to suppress obesity/diabetes-mediated cardiac lipotoxicity [14].
Pgrmc1 is a non-canonical progesterone receptor associated with reproductive functions, such as decidualization [15] and female fertility [16]. Recent studies have revealed metabolic functions of Pgrmc1 beyond reproduction, in the liver [17] and adipose tissue [18], focusing on the anabolism of glucose and lipids. Regulation of insulin, a major anabolic hormone, by Pgrmc1 has also been reported in the pancreas [19]. Although Pgrmc1-related anabolism has been extensively studied, the mechanism of Pgrmc1-related catabolism remains ambiguous. Furthermore, the regulation of cardiac health by Pgrmc1 has been investigated only in the energy-enriched state of diabetes. In this study, we investigated how Pgrmc1-related catabolism affects cardiac health during energy starvation. Based on previous reports of apoptosis and necrosis of cardiomyocytes during glucose starvation in vivo and in vitro [20,21], we used glucose starvation mouse models (72 h fasting) to mimic cardiac ischemia under physiological conditions. Additionally, an adrenergic stimulation model using isoproterenol injection was introduced to induce energy starvation in the heart, based on previous studies indicating lowered ATP production from ADP in the isoproterenol model [22]. Unlike in the overnutrition state, Pgrmc1 loss increased fatty acid and pyruvate oxidation in the heart during malnutrition. Our results indicate that maintenance of the major energy production pathways protected the Pgrmc1-ablated heart from energy starvation-induced injury.

## 2.1. Animals

Wild-type (WT) and Pgrmc1 global knockout (PKO) littermate mice [23] (8-week-old; C57BL/6 background) were raised in a pathogen-free facility at Chungnam National University under a standard 12:12 h light:dark cycle and fed a standard chow diet with water provided ad libitum. The mice were fasted to starvation, and unexpected deaths during the experiment were recorded to assess the survival rate.
Isoproterenol (total 230 mg/kg, subcutaneous) was injected over two weeks to induce adrenergic heart damage. To observe cardiac pumping in WT and PKO mice, fluorescent dye-labeled (DyLight 680 antibody labeling kit, Thermo Scientific, Waltham, MA, USA, 53056) bovine serum albumin (BSA) was intravenously injected into the mice. After 1 h, the mice were anesthetized and placed in an in vivo imaging system (IVIS; FOBI, Vancouver, BC, Canada). A video was recorded to observe cardiac pumping, and images of cardiac contraction/relaxation were also captured. All animal experiments were approved by the Chungnam Facility Animal Care Committee (CNU-00606) and adhered to its ethical guidelines.

## 2.2. Gene Expression Omnibus (GEO) Datasets

Public datasets (GEO) were used to determine PGRMC1 transcription levels in patients with cardiomyopathy. The GSE29819 and GSE36961 datasets were selected, and all patients were included in the analysis.

## 2.3. Comprehensive Laboratory Animal Monitoring System (CLAMS)

CLAMS was used to assess the metabolic status of starved mice. Oxygen consumption (VO2) and carbon dioxide production (VCO2) rates were measured using an Oxymax system (Columbus Instruments, Columbus, OH, USA). Mice were placed in the chambers at least 50 min before the experiment for acclimation. The respiratory exchange ratio (RER) and respiratory quotient (RQ) were calculated as the ratio of VCO2 to VO2. The mice were fasted from midway through the light cycle to midway through the dark cycle.

## 2.4. RNA Isolation, Reverse Transcription, and Quantitative Reverse Transcription–Polymerase Chain Reaction (qRT-PCR)

RNA was extracted from the hearts of mice and from H9c2 cells using TRIzol, chloroform, and isopropanol. The RNA pellet was washed with ethanol and dissolved in diethyl pyrocarbonate-treated water. RNA concentration was measured, and the same amount of RNA from each sample was used for cDNA synthesis with an Excel RT Reverse Transcriptase kit (SG-cDNAS100; Smartgene, Daejeon, Republic of Korea).
Real-time PCR was carried out using specific primers (Table 1), Excel Taq Q-PCR Master Mix (SG-SYBR-500; Smartgene), and a Stratagene Mx3000P (Agilent Technologies, Santa Clara, CA, USA) in a 96-well optical reaction plate. Negative controls containing water instead of sample cDNA were included in each plate.

## 2.5. Western Blotting

Protein samples were resolved on 8–$12\%$ sodium dodecyl sulfate (SDS) polyacrylamide gels (running buffer: 25 mM Tris, 192 mM glycine, $0.1\%$ SDS in distilled water). After electrophoresis, the gels were blotted onto a polyvinylidene difluoride membrane (IPVH 00010; Millipore, Burlington, MA, USA) at 350 mA for 1–2 h with transfer buffer (25 mM Tris, 192 mM glycine, and $20\%$ (v/v) methanol). Membranes were blocked in $3\%$ BSA and incubated with primary antibodies overnight at 4 °C. Membranes were washed thrice with TBS-T to remove excess antibodies and incubated overnight at 4 °C with the following secondary antibodies: goat anti-rabbit IgG horseradish peroxidase (HRP) (Catalog #31460) and goat anti-mouse IgG HRP (Catalog #31430; Thermo Fisher Scientific, Waltham, MA, USA). After washing thrice with TBS-T, immunoreactive proteins were visualized with ECL solution (Eta C Ultra 2.0; Cyanagen, Bologna, Italy) using a ChemiDoc system (Fusion Solo, Vilber Lourmat, Eberhardzell, Germany). The following primary antibodies were used: PGRMC1 (13856; Cell Signaling Technology, Danvers, MA, USA), ribosomal protein lateral stalk subunit P0 (RPLP0; A13633; Abclonal, Woburn, MA, USA), poly(ADP-ribose) polymerase (PARP; 9532; Cell Signaling Technology), C/EBP homologous protein (CHOP; #MA1-250; Invitrogen, Waltham, MA, USA), β-actin (sc-47778; Santa Cruz, Dallas, TX, USA), glycolysis antibody sampler kit (8337; Cell Signaling Technology), pAMPK and tAMPK (9957; Cell Signaling Technology), LC3B (L7543, Sigma-Aldrich, St. Louis, MO, USA), and α-tubulin (66031-1-Ig; Proteintech, Rosemont, IL, USA).

## 2.6. Blood and Plasma Measurements

For blood glucose measurement, the tail was snipped, and blood glucose levels were measured using an Accu-Chek Active kit (Roche, Basel, Switzerland). During necropsy, blood was collected from the IVC. Plasma samples were analyzed to determine the levels of free fatty acids (FFAs; BM-FFA100, Biomax, Planegg, Germany), triglycerides (TGs; TG-1650, Fuji Film, Tokyo, Japan), and total cholesterol (TCHO; TCHO-1450).

## 2.7. Cell Culture

All cell culture reagents were purchased from Welgene (Gyeongsan, Republic of Korea). H9c2 rat cardiomyocytes were maintained in Dulbecco’s modified Eagle’s medium (LM001-05; Welgene) supplemented with $5\%$ (v/v) fetal bovine serum (FBS, Punjab, Pakistan), penicillin (100 U/mL), and streptomycin (100 μg/mL). To reflect the plasma profile of mice, cells were incubated with a low-glucose/fatty acid medium (500 mg/L glucose, 110 µM palmitic acid, 220 μM oleic acid) for 24 h. For Pgrmc1 knockdown/overexpression experiments, cells were incubated with Opti-MEM (31985070; Gibco; without FBS) for 0.5 h and treated with the siRNA/plasmid and Lipofectamine 2000 (11668027; Thermo Fisher Scientific). The siRNA sequence used was 5′-CAGUUCACUUUCAAGUAUCA-U-3′. Medium containing FBS was added after 6 h.

## 2.8. Cardiac Fibrosis Measurement

Tissues were fixed with neutral-buffered formalin, and trimmed tissues were washed with tap water. Tissues were subjected to serial dehydration and embedded in paraffin. Paraffin blocks were cut (5 μm) using a microtome, and the sections were attached to silane-coated slides. Slides were immersed in xylene overnight and processed for Masson’s trichrome staining using a commercial kit (MST-100T; Biognost, Zagreb, Croatia) according to the manufacturer’s protocol. Regions of interest were observed under a light microscope.

## 2.9. Terminal Deoxynucleotidyl Transferase-Mediated dUTP Nick End-Labeling (TUNEL) Staining and Immunostaining

Frozen tissues were embedded in optimal cutting temperature compound and cut (8 μm) using a cryostat. Slides were dried overnight and washed with TBS-T. The TUNEL assay (11684795910; Roche, Basel, Switzerland) was performed according to the manufacturer’s protocol. After 4′,6-diamidino-2-phenylindole staining, the region of interest was observed under a fluorescence microscope. For immunostaining, frozen tissue slides were dried overnight and heated in an oven (65 °C) for 10 min. Slides were immersed in distilled water and subsequently in TBS-T. After blocking with $3\%$ BSA, slides were incubated with primary antibody (CD31, ab56299; Abcam, Cambridge, UK) overnight at 4 °C. The next day, slides were washed with TBS-T and incubated with secondary antibody (A21202, Life Technologies, Carlsbad, CA, USA) for 4 h at room temperature. The region of interest was observed under a fluorescence microscope.

## 2.10. Statistical Analysis

Data are reported as the mean ± standard deviation. Differences between means were analyzed via Student’s t-test or one-way analysis of variance followed by Tukey’s multiple comparison test using GraphPad software (GraphPad Inc., San Diego, CA, USA). Statistical significance was set at $p \leq 0.05$.

## 3.1. PGRMC1 Expression Is Associated with Energy-Starved Cardiomyopathy

Using public clinical datasets, we investigated the relationship between PGRMC1 expression and cardiomyopathy. In GSE29819, both ventricles from patients with dilated cardiomyopathy showed lower PGRMC1 expression levels than those from non-failing donor hearts (Figure 1A). In GSE36961, the hearts of patients with cardiomyopathy with left ventricular systolic dysfunction showed decreased PGRMC1 expression levels compared to those of normal individuals (Figure 1A).
Interestingly, the expression levels of key enzymes involved in fatty acid oxidation and glycolysis were lower in the hearts of patients with dilated cardiomyopathy (Figure 1A). Through several in vitro and in vivo experiments, we attempted to delineate the effects of energy starvation on cardiomyocyte health. We induced energy starvation in H9c2 cardiomyocytes and in mice via glucose starvation (glucose 0 mg/L, FBS $1\%$) and fasting (72 h), respectively. As shown in Figure 1B, cells under glucose starvation were predisposed to apoptotic cell death. Furthermore, hearts from mice under starvation (72 h) showed increased protein levels of an apoptotic marker (cleaved PARP) and an endoplasmic reticulum stress marker (CHOP) compared to those under resting conditions (Con) (Figure 1C). PGRMC1 protein expression was markedly suppressed by fasting (Figure 1C). These results indicate that PGRMC1 levels are closely related to energy starvation-induced cardiomyocyte injury.

## 3.2. Loss of PGRMC1 Maintains Whole-Body Metabolism during Starvation

Since there is no information on the physiological profile of PKO mice under starvation, we used CLAMS for a comprehensive assessment. In CLAMS, VO2 levels were markedly reduced from 14 h of fasting and reached baseline after 20 h of fasting in WT mice. In contrast, VO2 levels were generally maintained at high levels in PKO mice during fasting. VCO2 levels showed a similar pattern to the VO2 levels: they markedly decreased after 14 h of fasting and reached baseline after 20 h of fasting in WT mice, whereas PKO mice maintained high VCO2 levels during fasting (Figure 2A). Additionally, the RER (VCO2/VO2) ratios were lower in PKO mice than in WT mice during prolonged fasting (Figure 2B). RQ calculation revealed that PKO mice are more likely to consume fat than glucose during prolonged fasting (Figure 2C).
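For readers unfamiliar with indirect calorimetry, the RER/RQ interpretation above can be sketched numerically. The snippet below is a minimal illustration and not part of the study's analysis pipeline; the gas-exchange values are invented for demonstration, and the substrate thresholds (~0.7 for predominantly fat oxidation, ~1.0 for predominantly carbohydrate oxidation) are standard physiology textbook values.

```python
# Minimal sketch of the RER/RQ calculation used in indirect calorimetry (CLAMS).
# All VO2/VCO2 numbers below are hypothetical, for illustration only.

def rer(vco2: float, vo2: float) -> float:
    """Respiratory exchange ratio = VCO2 produced / VO2 consumed."""
    return vco2 / vo2

def dominant_substrate(r: float) -> str:
    """Rough textbook interpretation: ~0.7 -> fat, ~1.0 -> carbohydrate."""
    if r <= 0.75:
        return "mostly fat oxidation"
    if r >= 0.95:
        return "mostly carbohydrate oxidation"
    return "mixed substrate use"

# Hypothetical measurements (mL/kg/h):
fasted = rer(vco2=2100.0, vo2=3000.0)  # 0.70 -> fat-dominant
fed = rer(vco2=2900.0, vo2=3000.0)     # ~0.97 -> carbohydrate-dominant

print(round(fasted, 2), dominant_substrate(fasted))
print(round(fed, 2), dominant_substrate(fed))
```

Under this interpretation, a lower RER during prolonged fasting, as reported for PKO mice, indicates a shift toward fat as the dominant fuel.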
The heat production of PKO mice was maintained at a high level during fasting, notably from 14 h of fasting, compared to that of WT mice (Figure 2D). The physical activity of PKO mice was also maintained during the prolonged fasting period, while that of WT mice was substantially diminished during the same period (Figure 2E). When mice were starved for a long period, some died unexpectedly due to the energy deficit. PKO mice were resistant to starvation-induced death compared to WT mice (Figure 2F). These results indicate that PKO mice are physiologically resistant to energy starvation.

## 3.3. Pgrmc1 Loss Increases Fatty Acid/Pyruvate Oxidation and Decreases Starvation-Induced Cardiac Injury

To investigate how Pgrmc1 affects the heart under starvation, WT and PKO mice were starved for 72 h and thereby exposed to cardiac malnutrition. Blood glucose levels were at baseline in both starved WT and PKO mice, with no difference between the two groups (Figure 3A). Plasma lipid profiles increased in starved PKO mice; notably, plasma FFA and TG levels were significantly higher in starved PKO mice than in starved WT mice (Figure 3A). Heart weight (HW) decreased in starved PKO mice, while the ratio of HW to body weight (BW) was similar (Figure 3B). Western blotting showed that starved PKO hearts had decreased levels of cleaved PARP, an apoptotic marker, compared to starved WT hearts (Figure 3C). Concordantly, PKO hearts showed seemingly increased cardiac contractions in the fluorescence-based IVIS (Figure S1). Most hearts with hypertrophy or failure undergo metabolic alterations characterized by decreased fatty acid oxidation [24]. Fatty acid oxidation accounts for almost $70\%$ of cardiac energy production [25].
PKO hearts under starvation showed significantly increased expression levels of mitochondrial fatty acid oxidation enzymes (carnitine palmitoyltransferase 2 (Cpt2) and very long-chain acyl-CoA dehydrogenase (Vlcad)) and a peroxisomal fatty acid oxidation enzyme (acyl-CoA oxidase 1 (Acox1)) compared to WT hearts under starvation (Figure 3D). Glycolysis is a rapidly induced cardiac metabolic process associated with heart failure [26]. PKO hearts under starvation had markedly decreased levels of glycolysis-related proteins (hexokinase (HK)-1, HK2, and pyruvate kinase M2 (PKM2)) (Figure 3E). Glucose oxidation accelerates cardiac function recovery following myocardial injury [27]. Likewise, dichloroacetate, a pyruvate dehydrogenase (PDH) activator, increases myocardial efficiency [28]. Cardiac PDH was higher in PKO than in WT hearts under starvation (Figure 3E). These results indicate that starved PKO hearts increase their main energy production pathway, fatty acid/pyruvate oxidation, and are spared these metabolic alterations. As plasma FFA levels were highly maintained in PKO mice, it had to be tested whether these metabolic alterations were influenced by the levels of physiologically induced substrates. To limit confounding factors in vivo, we used H9c2 rat cardiomyocytes and knocked down Pgrmc1 with siRNA. The cells were exposed to low glucose (500 mg/L) and fatty acids (palmitic acid (110 µM)/oleic acid (220 µM)). PGRMC1 protein levels were lower in the PK (Pgrmc1 knockdown) group than in the CK (control knockdown) group (Figure 4A). Cleaved PARP levels were lowered in the PK group (Figure 4A). The metabolic alterations mirrored the in vivo results: the mRNA expression levels of Cpt2, Vlcad, and Acox1 were higher in the PK group than in the CK group (Figure 4B), the protein levels of HK1 and HK2 decreased in the PK group, and PDH levels increased in the PK group (Figure 4C).
Collectively, in vitro Pgrmc1 knockdown in low-energy cardiomyocytes induced fatty acid/pyruvate oxidation and decreased cellular injury. To investigate whether the metabolic alterations in the PK group increased energy production compared to the CK group under an energy deficit, we used a Seahorse flux analyzer to measure cellular respiration. H9c2 cells were knocked down and starved in a medium containing low glucose (500 mg/L) and fatty acids (palmitic acid (110 µM)/oleic acid (220 µM)). In the mitochondrial stress test, the PK group had a higher maximal respiration rate than the CK group (Figure 4D). We also measured mitochondrial fusion/fission gene expression levels to assess the mitochondrial balance [29]. PKO hearts had a mildly increased expression level of a fission gene (dynamin-related protein 1; Drp1) compared to WT hearts (Figure S2A). These results confirm that fatty acid/pyruvate oxidation in the PK group increases energy production even under reduced glycolysis.

## 3.4. AMPK Activation Is Associated with Pgrmc1-Induced Metabolic Alteration in the Heart

We investigated the possible mechanism of the metabolic alterations induced by Pgrmc1 loss. AMPK is a multi-functional protein kinase involved in the oxidation and uptake of metabolites [30]. Western blotting revealed that starved PKO hearts had increased phosphorylated AMPK (pAMPK) levels and decreased total AMPK (tAMPK) levels, resulting in a higher p/t AMPK ratio than in WT hearts (Figure 5A). In H9c2 cells, PK cells showed higher pAMPK and lower tAMPK levels than CK cells; concordantly, PK cells showed an increased p/t AMPK ratio compared to CK cells (Figure 5A). The metabolic effects of AMPK activation and inactivation in cardiomyocytes were then assessed. PGRMC1 levels were not directly regulated by AMPK activation, because treatment with either 5-aminoimidazole-4-carboxamide ribonucleotide (AICAR; an AMPK activator) or compound C (Com C; an AMPK inhibitor) suppressed PGRMC1 expression.
AMPK phosphorylation was increased by AICAR and decreased by Com C treatment (Figure 5B). HK1 levels were lowered by AICAR, whereas HK2 and PKM2 levels were increased by Com C. PDH levels were decreased by Com C (Figure 5B). In contrast, the expression levels of fatty acid oxidation enzymes were markedly increased by AICAR treatment (Figure 5C), and Com C treatment decreased Cpt2 and Vlcad expression levels (Figure 5C). In summary, AMPK activation was related to the induction of fatty acid/pyruvate oxidation and decreased glycolysis. As Pgrmc1 loss increased AMPK activation and produced metabolic alterations similar to those of AMPK-activated cells, AMPK may be linked to metabolic modulation by PGRMC1 in starved hearts.

## 3.5. Pgrmc1 Ablation Protects the Heart from Isoproterenol-Induced Damage

We used an isoproterenol-induced cardiac injury model according to previous studies [31,32]. Mice were injected with isoproterenol (five times, total 230 mg/kg, 14 days) and sacrificed (Figure 6A). Masson’s trichrome staining revealed that isoproterenol-WT hearts showed large positive areas of fibrosis (Figure 6B). In contrast, isoproterenol-PKO hearts showed decreased fibrotic areas compared with WT hearts (Figure 6B). Transforming growth factor-beta mRNA expression levels decreased in isoproterenol-PKO hearts (Figure 6C). The mRNA expression levels of the heart failure markers actin alpha 1 and brain natriuretic peptide were decreased in isoproterenol-PKO hearts compared to those in WT hearts (Figure 6D). In metabolic assessments, isoproterenol-PKO hearts showed higher levels of the fatty acid oxidation enzyme Cpt2 than isoproterenol-WT hearts (Figure 6E). Furthermore, isoproterenol-PKO hearts had decreased glycolysis enzyme levels and increased PDH levels, as well as an increased p/t AMPK ratio (Figure 6F).
Hence, isoproterenol-PKO hearts showed cardiac metabolic alterations similar to those of fasted PKO hearts: increased fatty acid/pyruvate oxidation and AMPK phosphorylation, and decreased glycolysis. Maintenance of the ATP-producing pathway, i.e., fatty acid/pyruvate oxidation, may provide cardioprotection under ischemic injury.

## 4. Discussion

Ischemic heart failure is prevalent worldwide [33]. Beyond traditional surgery, various methods using protein, cell, and gene therapeutics have been suggested for treatment [34]. Notably, several regulators of cardiac metabolism have been identified [35]. In the normal state, the heart relies heavily on long-chain fatty acids and utilizes glucose in a lower proportion for energy production [36]. Both fatty acid oxidation and glucose oxidation produce acetyl-CoA, which directly enters the tricarboxylic acid cycle and electron transport chain and accounts for $95\%$ of myocardial ATP production [7]. In failing hearts, fatty acid availability substantially affects myocardial function and efficiency [37]. Additionally, pyruvate oxidation, which produces acetyl-CoA from glucose-derived pyruvate, is limited in heart failure, resulting in impaired ATP production [7]. Thus, failing hearts are etiologically or consequently associated with impaired energy production via fatty acid/pyruvate oxidation. During cellular stress, AMPK phosphorylation downregulates fatty acid synthesis but upregulates fatty acid oxidation [38]. Although fatty acid oxidation itself can suppress pyruvate oxidation, AMPK activation increases glycolysis and pyruvate oxidation. Due to these diverse effects, whether AMPK improves or deteriorates cardiac health may differ according to the physiological state of the patient [39]. AMPK has been reported to increase overall ATP production in response to energy demand and to provide tolerance against cardiac ischemia [40].
When hearts were exposed to fasting or isoproterenol-induced energy starvation, Pgrmc1 knockout increased AMPK phosphorylation. Catabolic activation by Pgrmc1 knockout differed across metabolic pathways: fatty acid and pyruvate oxidation increased, but glycolysis decreased. Fatty acid oxidation takes place predominantly in the mitochondria and, to a lesser extent, in peroxisomes [41]. The mitochondrial fatty acid oxidation enzymes [42] Cpt2 and Vlcad and the peroxisomal fatty acid oxidation enzyme [43] Acox1 increased in PKO hearts. The high availability of plasma fatty acids in PKO mice may influence these catabolic processes. However, exposure to the same amount of fatty acids in the in vitro experiment also increased fatty acid oxidation in PK cells. Conversely, Pgrmc1-overexpressing (POE) cells exhibited decreased fatty acid oxidation (Figure S3). Hence, an increase in the fatty acid oxidation pathway affects cardiac energy metabolism upon Pgrmc1 loss. Paradoxically, PKO hearts had decreased levels of the glycolytic enzymes hexokinase and pyruvate kinase but increased PDH [44]. When cells were exposed to the same amounts of glucose and fatty acids, PK cells still increased pyruvate oxidation but suppressed glycolysis. Similarly, POE cells showed a mild increase in glycolysis (Figure S3). We speculated that a lactate source must be induced to supply pyruvate substrate to pyruvate dehydrogenase when glycolytic products are limited. Our results (data not shown) also showed the induction of lactate dehydrogenase in starved PKO hearts. Further studies on the regulation of lactate metabolism by Pgrmc1 should be performed. Glycolysis accounts for only <$10\%$ of cardiac ATP production [45], while the oxidation of fatty acids (50–$70\%$) [46] and pyruvate (20–$40\%$) [7] comprises the majority. Hence, starved PKO hearts may have increased overall ATP production.
Mechanistically, PKO hearts showed increased AMPK phosphorylation, and AMPK inhibitor (Com C) treatment resulted in the opposite cardiac metabolism pattern to that of PKO. In line with this, AMPK activator (AICAR) treatment produced a cardiac metabolism pattern similar to that of PKO. Concordantly, PKO-altered cardiac energy metabolism may be linked to AMPK phosphorylation during cardiac injury. We also measured cardiac autophagy, as AMPK is an autophagy promoter [47], but observed significantly down-regulated LC3B levels in PKO hearts. As Pgrmc1 is itself an autophagy promoter [48], cardiac autophagy appears to be governed mainly by Pgrmc1 rather than by AMPK. This is in accordance with our results, as autophagy is up-regulated in ATP-depleted and ischemic hearts [49]. We interpret the conflicting metabolic alterations and functions of PKO hearts in light of a previous study. In that study, PKO hearts under diabetic conditions showed increased TG and fatty acyl-CoA accumulation [14], leading to lipotoxicity. However, TG deposits play an ATP-providing role [50], and fatty acyl-CoA is directly related to oxidative phosphorylation in the heart [51,52]. In contrast to hearts under overnutrition, the large lipid pool in PKO can serve as an ATP source for energy-deficient hearts. Additionally, in our previous study, cardiac glycolysis was induced only in overnutrition PKO hearts and slightly decreased in normal PKO hearts [14]. In the energy-deficient state, glycolysis was significantly decreased in PKO hearts. In contrast, fatty acid oxidation was decreased in normal and overnutrition PKO hearts [14] but increased in malnutrition PKO hearts. We conclude that cardiac metabolic alteration by Pgrmc1 depends on glucose availability. In re-fed and diabetic mice, blood glucose levels were approximately 200 mg/dL [14], higher than those in starved mice (approximately 60 mg/dL). 
Pgrmc1 may be a physiological switch that regulates the preference of cardiac substrates for ATP production depending on the body’s nutritional state. In energy-deficit conditions, Pgrmc1 reduces the oxidation of fatty acids/pyruvate, thereby limiting ATP production in the heart. The failing heart shows a nearly $30\%$ reduction in ATP content [53] and a $50\%$ reduction in ATP-supplementing flux from the creatine kinase reserve compared with the normal heart [54]. ATP depletion in the failing heart directly leads to contractile dysfunction because continuous ATP production/turnover is necessary for cardiac function [24]. Fatty acid oxidation is the major cardiac ATP-producing pathway, but it suppresses glucose oxidation, as per the Randle cycle [55]. Since glucose oxidation is a more efficient and less oxygen-consuming ATP-producing pathway than fatty acid oxidation [28], its activation is therapeutically effective in the failing heart [56]. The fatty acid oxidation inhibitor etomoxir has been reported to exert cardioprotective effects by switching energy metabolism toward glucose oxidation [57,58]. However, adverse effects of fatty acid oxidation inhibition have also been observed in experimental/clinical reports [59,60]. Based on our results, Pgrmc1 inhibition increases both fatty acid and pyruvate oxidation and improves overall ATP production during energy starvation. Therefore, improving ATP production via a Pgrmc1 inhibitor could be a novel therapeutic approach for energy-starved failing hearts. Additionally, PKO hearts showed reduced CD31 abundance on immunostaining (Figure S2C). This result is of clinical importance, as marked CD31 levels are observed in the necrotic myocardium of patients who died of ischemic heart disease [61]. Furthermore, CD31 blockade reduces damage in ischemia/reperfusion heart injury [62]. 
As Pgrmc1 promotes cellular processes in microvascular endothelial cells of the brain [63], further study of Pgrmc1 and the cardiovascular system is warranted.
# Age Matters: The Moderating Effect of Age on Styles and Strategies of Coping with Stress and Self-Esteem in Patients with Neoplastic Prostate Hyperplasia ## Abstract ### Simple Summary This survey-based study assessed the coping mechanisms and quality of life (QoL) of patients diagnosed with and treated for prostate cancer. Patients using active forms of coping, seeking support and planning seem to have higher self-esteem, while maladaptive coping strategies in the form of self-blame can cause a significant decrease in patients’ self-esteem. Our results show that older patients, despite the use of adaptive strategies, have lower self-esteem. Early psychological assessment and mobilization of patients’ personal resources may allow patients to shift their stress coping methods towards more adaptive forms. ### Abstract The aim of this study was to analyze coping mechanisms and their psychological aspects during the treatment of neoplastic prostate hyperplasia. We analyzed the strategies and styles of coping with stress and the self-esteem of patients diagnosed with neoplastic prostate hyperplasia. A total of 126 patients were included in the study. Standardized psychological questionnaires were used: the type of coping strategy was determined with the Stress Coping Inventory MINI-COPE, while the type of coping style was assessed with the Coping Inventory for Stressful Situations (CISS). The SES Self-Assessment Scale was used to measure the level of self-esteem. Patients using adaptive strategies of coping with stress in the form of active coping, seeking support and planning had higher self-esteem. However, the use of maladaptive coping strategies in the form of self-blame was found to cause a significant decrease in patients’ self-esteem. The study also showed that the choice of a task-based coping style positively influences one’s self-esteem. 
An analysis of patients’ age and coping methods revealed that younger patients, up to 65 years of age, using adaptive strategies of coping with stress had a higher level of self-esteem than older patients using similar strategies. The results of this study show that older patients, despite the use of adaptive strategies, have lower self-esteem. This group of patients should receive special care both from family and medical staff. The obtained results support the implementation of holistic care for patients, using psychological interventions to improve patients’ quality of life. Early psychological consultation and mobilization of patients’ personal resources may allow patients to shift their stress coping methods towards more adaptive forms. ## 1. Introduction Among neoplastic diseases, prostate cancer has been one of the most frequently diagnosed noncutaneous cancers in recent years. In the United States alone, 160,000 men were diagnosed with prostate cancer in 2017 [1]. In Poland, prostate cancer is responsible for nearly $9\%$ of all cancer-related deaths. Compared with Europe, where the 5-year survival rate is $83.4\%$, Poland has a much lower rate of $66.6\%$ [2]. Symptoms associated with its diagnosis include pain and worsening of physical condition, and these are present in more than $50\%$ of patients. The choice of prostate cancer treatment option is complex; the quality of sexual life is an important aspect that influences patients’ quality of life. Radical prostatectomy (RP) often negatively affects sexual functioning, which contributes to an impaired sense of masculinity [3]. Radical prostatectomy and hormone therapy contribute to the loss of sexual function, causing feelings of confusion and disorientation in patients [4]. 
Previous studies have shown that patients who experience emotional disorders and distress are at a higher risk of poorer treatment outcomes and have lower adherence to the treatment plan, making the overall prognosis poorer than in emotionally stable patients [5]. It is estimated that $30\%$ of prostate cancer patients experience some form of emotional distress, defined as general suffering, and $10\%$ experience severe depression [6]. Studies have indicated that men diagnosed with prostate cancer experience emotional disturbances two to five times more often than the general population [7]. Adequate social support is an important factor for reducing anxiety and depression, and its lack contributes to a deterioration in quality of life. Patients coping with the disease on their own were found to more frequently experience depression and have worse mental well-being [8]. As for body image issues among cancer patients, Serbia et al. showed that a psychological intervention conducted in women with breast cancer influenced their adaptive approach to their bodies. Not only did the patients begin to view their bodies in a more positive way, but their self-confidence and willingness to cooperate also increased. The results of that study indicate that patients’ attitudes can change dynamically under a properly conducted psychological intervention. The initial reluctance of patients to have contact with their bodies transformed into no difficulty with physical contact with the body parts affected by surgery. Due to the mix of social and biological factors, symptoms of depression may be masked by unhealthy coping behaviors manifested as psychoactive substance abuse, reckless driving or casual sexual contact, and are thus more difficult to diagnose [9]. 
From the time of diagnosis through the entire treatment process, patients experience strong emotional stimuli that may negatively affect their well-being and hospitalization. Optimization of the medical treatment and quality of life of patients with cancerous prostate hyperplasia is one of the greatest challenges the modern healthcare system has to face. Patients subjected to long-term stress exposure were shown to have a weakened immune response as well as more frequent metastasis formation and recurrences of the disease [10]. According to Dropkin’s definition, body image is the changing perception of one’s own appearance, functions and sensations. The experiences related to changes in body image occur mostly on a subconscious level. Patients after surgical prostatectomy were found to experience pain and were surprised by the changes in the appearance and function of the penis. Studies also indicate unfavorable changes caused by hormonal disorders, which contribute to increased marital distance and deterioration of the relationship [11]. A common belief that only older men are affected by prostate cancer poses another problem when it comes to patient treatment. In younger patients, the prospect of losing full sexual and physical activity may contribute to a significant reduction in quality of life. Studies show that older patients, despite general health deterioration from prostate cancer, can maintain their subjective well-being and immunity at a relatively satisfactory level [12]. When talking with their doctor, patients are reluctant to discuss the deterioration or loss of sexual function associated with the treatment process. The lack of a sensitive discussion of intimate issues often leads to social isolation, negatively impacting family life [13]. Studies evaluating differences in coping strategies among patients of different ages have indicated worse functioning among younger patients. 
Due to the cancer diagnosis, they are often forced to revise their life plans. Moreover, they often experience a loss of independence and economic difficulties. However, younger patients tend to have greater psychological resources that can be used to deal actively and confrontationally with cancer diagnosis and treatment [14,15]. There is a limited amount of research on the moderating effect of age in the context of strategies and styles of coping with stress and self-esteem in patients with prostate neoplastic hyperplasia. A better understanding of this effect may contribute to changes in currently used strategies. Demonstrating different coping strategies among patients of different ages will allow for more efficient psychological intervention, integral to treatment. The objectives of this study were to:

- Assess stress coping strategies in relation to patients’ self-esteem.
- Assess stress coping styles in relation to patients’ self-esteem.
- Identify the predictors of stress coping styles and strategies that determine patients’ self-esteem.
- Determine the influence of patients’ age as a moderator of the relationship between self-esteem and ways of coping with stress.

## 2. Materials and Methods We conducted a cross-sectional single-center study to analyze self-esteem and stress coping strategies among patients diagnosed with and treated for prostate cancer. The study included 140 patients who were qualified by a multidisciplinary board for radical prostatectomy from June to December 2021. The board consisted of oncologists, urologists, radiotherapists, cancer coordinators and a psychooncologist, who worked at the urology department of Pomeranian Medical University. The qualification was based on the results of biopsy and diagnostic imaging. Patients qualified for other treatment options, including radiotherapy and/or hormone therapy, were excluded from the study to maintain the homogeneity of the study group. 
As Polish was the mother tongue of all of the patients, Polish adaptations of the questionnaires were used for the study. The questionnaires were provided by the psychologist at the time of hospital admission, while the patients were awaiting surgery. All patients were given a proper explanation of the study and the possibility to withdraw at any timepoint. Patients completed the questionnaires on their own in a hospital room. The questionnaires were handed out in an envelope; after filling in the forms, patients were asked to seal them in the envelope and return them to the researcher. All patients signed the informed consent form. Participants who refused to sign the informed consent form or did not complete the questionnaires were removed from the study. A total of 140 study participants were provided with the questionnaires, of whom 126 returned fully completed forms. Patients were asked to fill in the following questionnaires: a demographic data questionnaire, the Coping Inventory for Stressful Situations (CISS), the Rosenberg Self-Esteem Scale and the Mini-COPE questionnaire. The demographic questionnaire consisted of 9 questions covering patients’ age, place of residence, education, marital status, children, satisfaction with the relationship with wife/partner, satisfaction with relationships with children, financial situation and help from relatives and family. An adaptation of the Mini-COPE questionnaire was used to assess patients’ dispositional coping strategies, in the version by Ogińska-Bulik and Juczyński [2009]. The form included 28 statements assessing 14 strategies of coping with stress. The split-half reliability of the questionnaire was 0.86. The internal consistency of most of the scales was assessed as satisfactory [16]. 
In order to examine coping styles, an adaptation of the Coping Inventory for Stressful Situations (CISS) by Strelau et al. [17] was used. It consists of 48 statements concerning stressful events and specific coping patterns used in specific situations. Three main coping styles were identified: task-focused, emotion-focused and avoidance-focused. The avoidance-focused style was divided into engaging in vicarious activities and seeking social contact. The instrument has high accuracy and high internal consistency (Cronbach’s alpha 0.78–0.90). Finally, the Polish version of the Rosenberg Self-Esteem Scale, adapted by Łaguna, Lachowicz-Tabaczek and Dzwonkowska, was used to measure the general level of patients’ self-esteem. The questionnaire includes 10 statements. The reliability of the scale varies depending on the age of the patient, ranging from 0.81 to 0.83 [18,19]. Statistical analysis was performed using IBM SPSS Statistics 25. Analyses included basic descriptive statistics, the Kolmogorov–Smirnov (K-S) test, Student’s t-tests for independent samples, correlation analyses with Pearson’s r coefficient and stepwise linear regression. Results with $p < 0.05$ were considered statistically significant, while results with $0.05 < p \leq 0.1$ were interpreted as statistical trends. ## 3.1. Demographic Data A total of 126 patients diagnosed with prostate cancer participated in the study. Due to missing/incomplete data, the number of responses to specific questions differed between the questionnaires, which is noted in the tables below. The youngest patients in the study were 48 years old, while the oldest were 82. A total of 109 patients were married, and 30 assessed themselves to be in good financial standing, choosing 5 on a scale from 1 to 10, with 1 being the lowest. Among the study population, 40 patients had a secondary education. 
Specific data are presented in the tables below (Table 1 and Table 2). ## 3.2. Analysis of Socio-Demographic Variables in the Inventory for Measuring Coping with Stress—Mini-COPE In the analysis, we checked whether the number of children was related to the type of coping strategy. Multiple correlations were tested using Pearson’s r coefficient. As demonstrated in Table 3, three were statistically significant: the number of children was positively correlated with the strategies of using sense of humor, denial and self-blame. However, the strength of the reported relationships was low. Next, we assessed whether relationship (marital) satisfaction was related to coping processes. A series of Spearman’s rho rank correlation analyses was performed. As shown in Table 3, one correlation was statistically significant: relationship satisfaction correlated positively with the emotional support strategy. The strength of this relationship was low. The other correlations were not statistically significant. We also examined whether satisfaction with the paternal relationship was related to the strategy of coping with stress. Pearson’s r correlations were performed; however, all of them were statistically insignificant. Similar analyses were performed to assess the influence of financial situation on the choice of stress coping strategy. No statistically significant results were found. The influence of help from relatives was also assessed. Active coping strategies were more frequent among patients who received help from family members. This group of patients also had a lower tendency for psychoactive substance use and self-blame. ## 3.3. Stress Coping Style and Strategies, as well as Self-Esteem, Depend on the Age of the Respondents An analysis of the influence of patients’ age (under or over 65) on the type of stress coping style and self-esteem was performed using a series of moderation analyses with the PROCESS macro. 
The association between task-focused style and self-esteem was significantly moderated by patients’ age. Based on the conditional effects, the association was significant for patients aged <65 years ($B = 2.60$; $SE = 0.74$; $t = 3.51$; $p = 0.001$), but not for patients aged 65+ ($B = -0.31$; $SE = 0.91$; $t = -0.34$; $p = 0.736$). Similarly, a moderation model was used to analyze patients’ age as a moderator of the relationship between coping strategies and self-esteem. We found a statistically significant moderating effect of age on the relationship between active coping strategy and self-esteem. The correlation between these variables was statistically significant in the group of patients <65 ($B = 2.46$; $SE = 0.65$; $t = 3.77$; $p < 0.001$), while the relationship was insignificant among patients aged 65+ ($B = 0.27$; $SE = 0.72$; $t = 0.37$; $p = 0.710$). There was also a statistically significant moderating effect of age on the relationship between the strategy of positive re-evaluation and self-esteem, with a statistically significant association for patients up to 65 years of age ($B = 2.24$; $SE = 0.62$; $t = 3.59$; $p < 0.001$) and an insignificant association for patients aged 65+ ($B = -0.99$; $SE = 0.75$; $t = -1.31$; $p = 0.191$). We also found a significant moderating effect of age on the relationship between seeking emotional support and self-esteem. Younger patients (<65 years old) with a lower tendency to choose the strategy of seeking emotional support had lower self-esteem ($B = 2.02$; $SE = 0.52$; $t = 3.92$; $p < 0.001$). Among patients 65+, the relationship was not significant ($B = 0.14$; $SE = 0.59$; $t = 0.23$; $p = 0.817$). The associations with the individual dimensions of coping with stress were also evaluated. We found a statistically significant moderating effect of age on the relationship between active coping and patients’ self-esteem. The correlation was statistically significant among patients <65 years of age ($B = 3.17$; $SE = 0.75$; $t = 4.23$; $p < 0.001$), while the effect was insignificant for patients aged 65+ ($B = 0.15$; $SE = 0.91$; $t = 0.16$; $p = 0.873$). A similar association was found for the dimension of seeking support and self-esteem. The correlation was statistically significant for patients aged <65 years ($B = 2.40$; $SE = 0.60$; $t = 3.98$; $p < 0.001$), but insignificant for patients aged 65+ ($B = 0.27$; $SE = 0.69$; $t = 0.40$; $p = 0.692$). A graphical presentation of all statistically significant correlations is shown in Figure 1, while the remaining insignificant correlations are presented in Appendix A. ## 4. Discussion In the case of cancer patients, an important consideration is that coping strategies do not tend to change: patients who used specific methods of coping with difficult situations were likely to use identical strategies at cancer diagnosis and during different stages of treatment [20,21]. In this study, we assessed the stress coping styles and strategies used by patients diagnosed with prostate cancer. We also tried to evaluate how individual strategies affect patients’ self-esteem. To determine how to support prostate cancer patients, we used our database to analyze the influence of sociodemographic variables on coping strategies. A noteworthy result was that patients who received support from family and relatives tended to use an adaptive strategy in the form of active coping. Relatives’ support also correlated negatively with the use of psychoactive substances and self-blame, which are considered maladaptive strategies. Despite the weak correlations, these data provide a basis for further investigation. In our research, we also examined the influence of adaptive stress coping strategies on patients’ self-esteem. 
A task-focused stress coping style was positively associated with patients’ self-esteem. Patients looking for information about their disease and actively cooperating with a doctor were characterized by higher self-esteem. On the other hand, self-esteem was lower in patients using an emotion-based style. Similar findings were reported by Shakeri et al. [22], as cancer patients adopting an emotion-focused coping style experienced reduced quality of life. This is because, both at the time of diagnosis and later, the accompanying emotions are usually negative. Emotions such as regret, anger and a sense of injustice negatively affect a patient’s mental sphere and may constitute a new source of stress. Social withdrawal and a focus on the subsequent stages of cancer treatment are a combination that may effectively increase patients’ positive self-esteem, allowing a view from a different, more positive perspective [22]. Studies have demonstrated that men use emotion-based strategies less frequently than women. This difference between male and female populations may be used at the beginning of cancer treatment: instead of concentrating on suppressing patients’ emotions, therapy can focus on analyzing subsequent treatment and mobilizing patients’ personal resources [23]. Our data support the role of adaptive styles of coping with stress among prostate cancer patients. We found patients using a task-oriented coping style to have higher self-esteem. Our study also demonstrated that the non-adaptive style influences patients’ self-esteem; such patients tended to focus on their emotions as a coping method. Our results are consistent with previous studies assessing cancer patients. Among the studied styles, the avoidance style was not significantly related to self-esteem. Multiple coping strategies were found to influence patients’ self-esteem both positively and negatively. 
The first strategy significantly related to self-esteem was active coping. Patients who, when confronted with a stressor, undertook active steps to reduce it, initiated specific actions directly and increased their efforts to fight the disease were found to have higher self-esteem. Similar results were obtained in a meta-analysis by Roesch et al. [2005], showing a positive relationship between self-esteem and active coping strategies [24]. Another strategy positively related to self-evaluation was planning. Once a difficult situation is identified, formulating an action plan and analyzing different strategies were found to reduce the associated stress. Patients using a planning strategy tend to weigh their resources against the source of stress, in this case prostate cancer, indicating the secondary nature of the appraisal. As part of cancer treatment planning, patients can prepare for the upcoming treatment and its consequences, including the possible adverse effects of surgical prostate resection. The importance of patient preparation was noted by Spendelow et al. [2017], who showed in their meta-analysis that the use of active coping strategies could reduce patients’ perception of both physical and psychological pain. The authors also indicated that the timing of a patient’s recovery may be associated with specific strategies, with slower recovery demonstrated among patients using non-adaptive strategies [25]. Our study also showed a positive correlation between self-esteem and strategies of seeking instrumental and emotional support, which are based on seeking information and help. For a variety of reasons, patients may seek a second opinion to confirm the diagnosis and treatment options and/or to obtain further guidance. Seeking emotional support is related to the need for help in enduring the hardships of treatment and for understanding from family and friends. 
Complications associated with prostate cancer treatment often include disturbance of physiological functions, not only urinary incontinence and nocturia, but also negative effects on sexual function, adding psychological burden [26]. Our results are consistent with the previous literature. Family support can significantly help cancer patients and influence their treatment outcomes. A review conducted by Sukyati et al. demonstrated that family support can even contribute to relapse prevention. Anxiety has a negative impact on health, lowering patients’ self-confidence, causing insomnia and lowering patients’ quality of life. A partner’s support can foster greater control over the emotional sphere, which significantly reduces the level of anxiety. In addition, social support has been shown to contribute to greater acceptance of the disease and a reduction of depressive symptoms. In the last part of the study, we evaluated the moderating effect of age on coping strategies and self-esteem. We found several significant correlations between patients’ age, their stress response and self-confidence. In patients up to 65 years old, the use of active adaptive stress coping strategies was found to correlate with higher self-esteem; however, the correlation was insignificant for older patients. The differences between the two age groups may be explained by different stages of developmental psychology. Patients in later adulthood are in the culmination stage of life, and their developmental tasks concentrate mostly on contributing to the well-being of future generations, while younger patients concentrate more on self-actualization and integration. Our research did not reveal sociodemographic differences between the strategies used and the support of partners, families and children; therefore, from the beginning of the study, we did not assume any subdivision of patients. A previous study by Matzka et al. 
[15] found no influence of social support on patients’ resilience index. A cancer diagnosis adds psychological burden, and, among older patients, even the use of adaptive strategies did not increase self-esteem. Patients in midlife (middle adulthood) using adaptive stress coping strategies in the form of acceptance, active coping, positive reappraisal and seeking instrumental and emotional support were found to have higher self-esteem. These findings are particularly important due to the role of self-esteem in reducing anxiety and cancer-associated stress. Patients with higher self-esteem tend to have a more positive attitude, regardless of the difficulties encountered [27]. Self-esteem and self-determination can be used as resources during patients’ cancer treatment. When they are high, patients can have a sense of control and power over situations in their lives, reducing the negative psychological implications of cancer diagnosis and treatment [24]. The results of a meta-analysis by Roesch et al. showed that prostate cancer patients using active coping had lower levels of anxiety and depression, consistent with Lazarus and Folkman’s transactional model. Patients who approach the disease as a challenge more often use strategies based on active coping [23]. There is limited research on the influence of age on cancer patients’ stress coping styles and strategies. However, our findings highlight the need for further studies and provide an important direction for working with cancer patients. In our study, among the younger group of patients, we noticed that a task-oriented stress coping style and the use of adaptive strategies correlate with higher self-esteem. This allowed us to identify a population of patients potentially able to withstand the negative emotions associated with cancer and requiring the least psychological intervention. 
In our study, we also identified a group of patients that should receive special attention during psychological interventions. Among older respondents, social withdrawal and lower physical activity were more common. Our research revealed the importance of spouse/partner support, as it correlated with longer patient survival. Psychological support should be provided as a routine procedure for cancer patients and can draw on various adaptive stress coping strategies. Services should also consider including patients’ partners in the psychological support program. The results of our study may be helpful for future clinical trials. During patient consultation, standardized questionnaires assessing strategies and styles of coping with stress and self-esteem can be used for psychological evaluation. For patients using non-adaptive coping strategies, it is important that psychological interventions help them reformulate their thinking and coping strategies and work with them so that they adopt adaptive forms of coping. Patients undergoing prostatectomy are usually discharged home two days after surgical treatment, and this is often the last time they have contact with a psychologist. The introduction of an interactive clinic, organized, e.g., by the Canadian organization Movember, would give patients better psychological care. Patients can also be provided with educational materials and online psychological consultations to ease patient–psychologist contact. The importance of psychosocial support was previously demonstrated and is supported by the Stanford Chronic Disease Self-Management Program (CDSMP), present in the United States since 2010. The objective of the program is to enhance patients’ self-efficacy so that they have more confidence in their ability to fight the disease. 
As a part of the program, during a 6-week workshop, individuals learn self-management through adaptive problem solving, activity planning, medication and symptom management, physical activity and communication with healthcare professionals. Its effectiveness was demonstrated by Salvatore et al., who showed positive effects on quality of life and health-related outcomes of cancer survivors [28,29]. ## 5. Conclusions For the majority of patients, cancer diagnosis is a difficult and complex process. Patients diagnosed with a malignant disease present with various coping mechanisms related to their coping resources, economic situation, education and previous experience. The incidence of prostate cancer is rising, and, each year, more and more patients will face its diagnosis. Regardless of cancer staging, even for patients presenting with advanced forms of the disease, cancer diagnosis is usually shocking and followed by a range of strong emotions experienced by both patients and their families. An important aspect that should be complementary with cancer treatment is psychological help. The aim of our research was to identify stress coping styles and strategies used by prostate cancer patients. The results of this study are not only extremely important from the patients’ perspective, but also for the medical personnel involved in cancer treatment. Both adaptive and non-adaptive coping styles were found among patients diagnosed with prostate cancer. Given the relatively constant nature of coping styles, we can speak of specific models used by patients throughout the treatment. Our findings seem to be consistent with theoretical assumptions, as, in the case of personal coping resources and support of loved ones, patients tend to use a task-focused coping style, favoring higher self-esteem. On the other hand, a problem-based stress coping style was found to negatively influence patients’ self-esteem. 
We have also shown that the use of adaptive stress coping strategies among prostate cancer patients contributes to higher self-esteem. The study included two age group categories, which differed in patients’ attitudes and approaches towards cancer diagnosis and treatment, showing that patients aged over 65 should receive special psychological care. ## 6. Limitations There are several limitations to this study. It focused on one cancer type only; further research is needed to assess differences between the coping styles and strategies used by different cancer patient populations, genders and diagnoses. Another limitation was that we did not compare changes in patients’ stress coping strategies over time. The sample was also limited, including only 126 prostate cancer patients; further studies on larger populations should be performed in order to confirm the study results and provide further knowledge on the psychological aspects of prostate cancer diagnosis and treatment. A strength of the study was its evaluation of the importance of family support, which was found to influence the use of adaptive or non-adaptive coping strategies.
# Importance of Asprosin for Changes of M. Rectus Femoris Area during the Acute Phase of Medical Critical Illness: A Prospective Observational Study ## Abstract Asprosin, a new adipokine, is secreted by subcutaneous white adipose tissue and causes rapid glucose release. Skeletal muscle mass gradually diminishes with aging. The combination of decreased skeletal muscle mass and critical illness may cause poor clinical outcomes in critically ill older adults. To determine the relationship between the serum asprosin level, fat-free mass, and nutritional status of critically ill older adult patients, critically ill patients over the age of 65 receiving enteral nutrition via feeding tube were included in the study. The patients’ cross-sectional area of the rectus femoris (RF) of the lower extremity quadriceps muscle was evaluated by serial measurements. The mean age of the patients was 72 ± 6 years. The median (IQR) serum asprosin level was 31.8 (27.4–38.1) ng/mL on the first study day and 26.1 (23.4–32.3) ng/mL on the fourth study day. The serum asprosin level was high in $96\%$ of the patients on the first day and in $74\%$ on the fourth day after initiation of enteral feeding. The patients achieved 65.9 ± $34.1\%$ of the daily energy requirement over the four study days. A significant moderate correlation between the delta serum asprosin level and delta RF was found (Rho = −0.369, $$p \leq 0.013$$). In critically ill older adult patients, a significant negative correlation was determined between the serum asprosin level and both energy adequacy and lean muscle mass. ## 1. Introduction According to the 2017 revision of world population estimates, Europe faces exceptional demographic changes. Individuals aged 60 and above already constitute $25\%$ of the population, and by 2050, this ratio is expected to reach $35\%$. The number of those aged 80 and above will triple by 2050 [1]. 
In developed countries, increased average life expectancy has also resulted in increased demand for hospitalization of the elderly population in hospitals and intensive care units (ICU). It has been determined that more than $50\%$ of the patients hospitalized in intensive care units are above 65 years old [2,3,4]. A systematic review demonstrated a significantly high malnutrition prevalence (38–$78\%$) in ICU patients. This situation is correlated with an increase in morbidity, mortality, and hospital-related costs [5]. Dependency on mechanical ventilation is correlated with malnutrition, length of hospital stay, ICU readmission, infection rates, and risk of hospital death, making this a critical dilemma in ICU patients’ care. There are significant challenges in accurately estimating energy requirements and hence the optimal dosing of nutrition. Critical illness results in hypermetabolism and hypercatabolism, putting patients in the ICU at high risk of malnutrition. The metabolic and hormonal changes of critical illness result in muscle wasting and associated ICU-acquired weakness, which can persist for years [6]. Muscle atrophy can occur relatively early in critically ill patients in intensive care units. It occurs through increased muscle protein breakdown and decreased muscle protein synthesis [7,8]. Inflammation, immobilization, endocrine stress responses, rapidly developing nutritional deficit, impaired microcirculation, and denervation are conditions that accelerate muscle atrophy [9,10]. Additionally, muscle loss is common in humans due to aging. Accordingly, muscle loss caused by aging may deepen in the presence of critical illness. Reversing skeletal muscle catabolism can prevent muscle atrophy during critical illness and improve functional outcomes [11,12,13]. Proinflammatory mediators are used as an indicator of muscle atrophy during critical illness [14]. 
Ultrasound is widely used in clinical practice, greatly contributing to the diagnosis and management of many conditions. While systematic ultrasound examinations have mainly been conducted by sonographers in an examination room, there is now considerable interest in having physicians perform ultrasounds at the bedside as part of regular medical examinations. Studies using portable ultrasounds have been spreading not only in the emergency room and intensive care unit (ICU) settings, but also in out-of-hospital situations such as primary care and long-term care facilities (e.g., nursing homes). Additionally, muscle ultrasound is a suitable method for evaluating patients with muscle atrophy. Ultrasonographic evaluation of quadriceps muscle thickness effectively determines the effect of nutritional interventions on muscle loss in critically ill patients [7,8]. Adipose tissue functions as an endocrine organ and central energy store that produces a diversity of bioactive mediators and adipokines (adipose-derived secreted factors) with proinflammatory or anti-inflammatory effects. Adipokines can easily move into the systemic circulation and exert their effects through an inter-cell communication network (autocrine, paracrine, endocrine). Furthermore, they help regulate several aspects of normal metabolic processes in the human body, such as glucose and lipid homeostasis, insulin sensitivity, and the inflammatory response [15]. Asprosin is a novel glucogenic adipokine discovered in 2016, mainly secreted from white adipose tissue, with a critical role in the regulation of hepatic glucose release, insulin secretion, appetite, and the inflammatory response [16]. Moreover, it activates the PKCδ/SERCA2-mediated endoplasmic reticulum stress/inflammation pathways in skeletal muscle and promotes insulin resistance [17]. Insulin resistance has been shown to be considerably higher in critically ill patients than in healthy individuals [10]. 
Evidence suggests an association between secreted asprosin levels and the extent of weight loss resulting from bariatric surgery, including sleeve gastrectomy or cholecystectomy. Two studies indicated a significant decrease in serum asprosin levels six months after weight loss surgery [18,19]. During fasting, the circulating serum asprosin level rises according to the glucose requirement and decreases with the start of feeding. Providing adequate nutritional support to critically ill patients has a critical role in the clinical prognosis of the patient [20,21]. Considering the above-mentioned information, the relationship between asprosin, muscle mass, and nutritional support in critically ill elderly patients is unclear. This study aimed to reveal the relationship between the serum level of asprosin, a new adipokine, the change in lean muscle mass in critically ill elderly patients, and the nutritional support given to the patients. ## 2.1. Study Design and Participants This was a prospective observational study conducted in the medical intensive care unit of a tertiary care hospital from March to September 2022. All patients over the age of 65 who were expected to stay in the intensive care unit for at least 4 days and who were administered enteral nutrition support within the first 48 h after their admission to the ICU were included. Patients who could be fed orally, who had previously been treated with parenteral therapy, or who had contraindications for enteral nutrition were excluded from the study. The study was approved by the local ethics committee (No: 586, date: 24 February 2022) and was conducted according to the Declaration of Helsinki. Free and informed consent was obtained from the legal guardians of the study participants. ## 2.2. 
Data Collection Demographic data, ICU admission diagnostics, comorbidities, APACHE II (Acute Physiology and Chronic Health Evaluation) scores, SOFA (Sequential Organ Failure Assessment) scores, and the Charlson comorbidity index were recorded at admission. During the follow-up, the need for mechanical ventilation, renal replacement requirement, and the number of days spent in the intensive care unit and hospital were recorded. The energy target of the patients was calculated as 25–30 kcal/kg/day according to the ESPEN recommendations [20]. The daily energy actually delivered by feeding tube was recorded for the first four days after the start of enteral nutritional support, and the percentage of patients reaching the target energy was calculated. No adjustments were made for age or BMI when calculating energy targets. The risk of malnutrition was determined by the NRS-2002 score. The NRS-2002 form was filled in by the nurses on the day the patients were admitted to the intensive care unit, using information from the patients and their relatives, and was recorded in the patient file. The nutritional risk of the patients was determined, and a nutrition plan was made. Patients with NRS-2002 ≥ 3 were considered at risk for malnutrition. ## 2.3. Serum Asprosin Measurement Blood samples (3 mL) were collected in tubes and centrifuged at 3000× g for 10 min at the 24th hour (first day) and on the fourth day after the start of enteral nutrition support. In our intensive care unit, feeding is interrupted at 11 a.m. for all patients receiving enteral nutrition. Blood samples were drawn in the morning, in the fasting state, before feeding was re-initiated. Then, 1 mL of serum supernatant was removed and collected in an Eppendorf tube. Serum samples were kept frozen at −80 °C. Serum asprosin concentrations were analyzed using the ELISA method (Cat. No. E4095HU). 
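The energy-target arithmetic described in the Methods (25–30 kcal/kg/day, and the percentage of that target actually delivered by tube) can be sketched as follows. This is a minimal illustration, not the study's code; the function names and the 70 kg example are assumptions.

```python
def energy_target_kcal(weight_kg, low=25, high=30):
    """Daily energy target range (kcal/day) per the 25-30 kcal/kg/day rule."""
    return weight_kg * low, weight_kg * high

def percent_of_target(delivered_kcal, weight_kg, kcal_per_kg=25):
    """Percentage of the daily energy target actually delivered."""
    return 100 * delivered_kcal / (weight_kg * kcal_per_kg)

# Hypothetical 70 kg patient who received 1155 kcal via feeding tube in a day:
print(energy_target_kcal(70))                 # (1750, 2100)
print(round(percent_of_target(1155, 70), 1))  # 66.0
```

Note that the percentage depends on which end of the 25–30 kcal/kg range is taken as the target; the lower bound is used here only as an illustrative assumption.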
Delta (Δ) asprosin was calculated as the difference between the first- and fourth-day asprosin levels of the patients. A normal asprosin level was considered <23.6 ng/mL (according to the reference range of the kit used, determined by the Bioassay Technology Laboratory). ## 2.4. Ultrasonographic Assessment Ultrasound measurements were performed at 24 h (day 1) and on day 4 after the start of enteral nutritional support. A Philips ClearVue 550 system with a linear ultrasound probe (4–12 MHz) was used in B mode, with patients measured in the supine position. The cross-sectional area of the rectus femoris (RF) muscle of the lower extremity quadriceps was measured. The probe was held perpendicular to the thigh axis at a point located two-thirds of the distance from the anterior superior iliac spine to the upper border of the patella. All ultrasonography (USG) measurements were performed by an intensive care specialist with five years of USG experience. Delta (Δ) RF was calculated as the difference between the RF areas of the patients on the first and fourth days. ## 2.5. Statistics Statistical analysis was performed using IBM SPSS Statistics version 22 (IBM, New York, NY, USA). The normality of the distributions of continuous variables was examined using the Shapiro–Wilk test. Depending on the normality of the distribution, continuous variables are presented as mean ± SD or median (interquartile range). Categorical variables are shown as numbers (%, percentage). Correlations were investigated using Spearman’s correlation test. The correlation coefficient was interpreted as 0–0.29 (weak), 0.30–0.69 (moderate), and 0.70–1.0 (strong). A value of $p \leq 0.05$ was considered statistically significant for all analyses. The study sample size was calculated as 42 patients, assuming a medium effect size for the baseline asprosin level ($d = 0.5$), $80\%$ power, and a $5\%$ error probability, using G*Power 3.1 software. ## 3. 
Results Two hundred and fifty-one patients hospitalized in the intensive care unit were evaluated. Of these, 94 were under the age of 65, and 94 did not receive enteral nutrition (26 received oral nutrition, 68 received parenteral nutrition) and were excluded. Initially, 67 patients were included in the study. However, 12 patients died during the study period, and 7 patients were excluded because their serum samples were hemolyzed. Two patients were excluded from the study because they switched from enteral to oral feeding. As a result, a total of 46 patients were included in the study (Figure 1). The mean age of the patients was 72 ± 6 years, and 25 patients ($54.3\%$) were male. The median (IQR) BMI of the patients was 22.0 (20.9–29.0), the mean APACHE II score was 19.8 ± 6.98, and the median (IQR) Charlson comorbidity index was 6 (4–8). Metabolic disorders (n: 17, $37.0\%$) and sepsis/septic shock ($26.1\%$) were the most common reasons for admission to the intensive care unit. Diabetes mellitus was present in 16 ($34.8\%$) of our patients, all of whom received insulin therapy. The most common comorbidity was hypertension, present in 29 patients ($63.0\%$). Malnutrition risk was found in 32 patients ($70.0\%$). The mean percentage of the daily energy requirement achieved was 65.9 ± $34.1\%$ during the 4-day follow-up. The daily energy requirements and the amounts actually received are presented in Table 1. The mean daily protein intake was 0.4 ± 0.27 g/kg/day on the first day and 0.8 ± 0.47 g/kg/day on the fourth day. The proportions of patients requiring mechanical ventilation (21; $45.7\%$) and renal replacement therapy (21; $45.7\%$) were high. Demographic data and clinical characteristics of the patients are presented in Table 1. Median (IQR) asprosin levels were 31.8 (27.4–38.1) ng/mL on the first day and 26.1 (23.4–32.3) ng/mL on the fourth day. 
The serum asprosin concentration of the study participants decreased significantly ($p \leq 0.001$), with a delta asprosin value of −5.77 (−9.22 to 0.28) (Table 2). The asprosin level was high in $95.7\%$ of the patients on the first day. This rate decreased on the fourth day of the study, when $73.9\%$ of patients had high asprosin levels (Figure 2). The median RF area was 1.68 (1.35–2.07) cm2 on the first day and 1.82 (1.38–2.01) cm2 on the fourth day ($$p \leq 0.196$$). The median delta RF was 0.15 (−0.43 to 0.46). The laboratory values of the patients on the first and fourth days are presented in detail in Table 3. The glucose level of the patients was 143 (110–194) mg/dL on the first day, higher than on the fourth day (125 (103–170) mg/dL) ($p \leq 0.001$). The albumin value was statistically significantly lower on the fourth day than on the first day of the study (2.7 (2.4–3.1) g/L vs. 2.5 (2.2–2.9) g/L, $$p \leq 0.001$$). A significant negative correlation was observed between the delta asprosin level and the delta RF value of the patients (Rho = −0.369, $$p \leq 0.013$$) (Figure 3). There was a moderate correlation between the serum asprosin level of the patients and the percentage of the daily energy target received (Rho = 0.345, $$p \leq 0.027$$) (Figure 4). The correlations between the serum asprosin value and the severity of illness and biochemical parameters of patients at both study time points are summarized in Table 4. A negative correlation was determined between albumin and prealbumin levels and both the first-day and delta asprosin levels ($p \leq 0.05$). ## 4. Discussion To the best of our knowledge, this prospective study is the first to investigate the serum asprosin value and its relationship with muscle mass and nutritional adequacy in critically ill older adult patients. Most of the participants had increased serum asprosin levels upon study admission. 
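The delta convention and the correlation-strength cutoffs defined in the Methods can be made concrete with a small sketch. This is illustrative only; the sign convention (day 4 minus day 1) is inferred from the reported values, e.g. median asprosin falling from 31.8 to 26.1 ng/mL gives a negative delta.

```python
def delta(day4_value, day1_value):
    """Change between study days; negative means a decrease by day 4."""
    return day4_value - day1_value

def correlation_strength(rho):
    """Study's cutoffs applied to |rho|: 0-0.29 weak, 0.30-0.69 moderate, 0.70-1.0 strong."""
    magnitude = abs(rho)
    if magnitude < 0.30:
        return "weak"
    if magnitude < 0.70:
        return "moderate"
    return "strong"

# Median asprosin fell from 31.8 to 26.1 ng/mL between day 1 and day 4:
print(round(delta(26.1, 31.8), 1))   # -5.7
print(correlation_strength(-0.369))  # moderate
```

On this convention, the reported Rho = −0.369 between delta asprosin and delta RF falls in the "moderate" band, matching the paper's description.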
On the fourth day after the initiation of enteral nutrition support, the serum asprosin concentration of the study sample had decreased significantly compared to the first day of the study. There was a significant negative correlation between the delta asprosin value and the delta RF of patients. In addition, the delta asprosin value was significantly correlated with the percentage of the daily energy requirement received. Almost all patients had elevated asprosin levels on the first day of the study. A significant decrease in asprosin levels was observed in our patients after four days of enteral nutrition. We attribute the high first-day asprosin levels to malnutrition and/or the marked insulin resistance that develops in critical illness. The mean age of our patients was high, and $70.0\%$ were at risk of malnutrition. The main concern in the elderly, especially the very elderly and those with multiple comorbidities, is reduced food intake and weight loss. Malnutrition in elderly patients delays recovery in both acute and chronic diseases and increases morbidity and mortality [22,23]. In response to starvation with a low-intake diet, asprosin is released from white adipose tissue and transported to the liver to mediate glucose release into the bloodstream. Additionally, asprosin is abundantly expressed in human skeletal muscle-derived mesoangioblasts, suggesting that the musculoskeletal system may play a role in regulating asprosin expression [24]. A cross-sectional study by Hu et al. included 46 patients with anorexia nervosa and found that these patients had a statistically significantly increased plasma asprosin level compared to healthy controls [25]. Providing adequate nutritional support to critically ill elderly patients may be a key method of optimizing increased asprosin levels. Insulin resistance in intensive care patients has been shown to be considerably higher than in healthy individuals [10]. 
In a cross-sectional study by Goodarzi et al., the serum asprosin level was reported to be statistically significantly positively correlated with HbA1c, HOMA-IR, and insulin levels in patients diagnosed with type 2 diabetes mellitus and nephropathy [26]. Similarly, the first-day glucose values of our patients were higher than the glucose values after four days of feeding. Factors including systemic inflammation, decreased peripheral blood flow, inactivity, insulin resistance, and decreased food intake may cause significant reductions in muscle mass in severely ill patients hospitalized in intensive care units. Malnutrition, arising from a negative balance between patients’ nutritional requirements and their actual intake, leads to decreased muscle mass and functionality and is common among ICU patients. Thus, the correct nutritional diagnosis of these patients is critical to support adequate dietary maintenance. Nevertheless, nutritional evaluation is challenging in intensive care units, particularly when monitoring nutritional status. Ultrasonography is a portable, non-invasive bedside method that can identify and measure skeletal muscle and has been used as a supportive examination tool in nutritional diagnostics. The ability to detect short-term changes through serial measurements is one of the most advantageous aspects of ultrasonography compared to other anthropometric measurement instruments. Ultrasound examination of rectus femoris muscle thickness has been reported to be useful in the monitoring of nutrition [27,28]. In a mouse study by Duerrschmid et al., wild-type mice and mice with truncating mutations in the FBN1 gene were fed a high-calorie, high-fat diet. Mice with truncating FBN1 mutations had less fat and muscle content than wild-type mice. This study demonstrates that asprosin, encoded by FBN1, has an impact on nutrition and muscle mass [29]. 
One of our hypotheses was that insulin resistance, which is very common in intensive care units, may improve as adequate nutrition is provided, and that changes in the asprosin level may be involved in this process. Since we did not measure insulin resistance, we cannot confirm this. Muscle wasting also adversely affects the clinical outcomes of patients. Quadriceps muscle thickness is used to evaluate nutritional interventions in critically ill patients in the intensive care unit [30]. The present study demonstrated a negative correlation between the change in quadriceps muscle thickness and asprosin after four days of nutrition. In the current literature, it has been reported that the musculoskeletal system effectively regulates the level of asprosin [24,31]. Du et al. conducted a cross-sectional study of 120 cancer patients and found a statistically significant positive correlation between the serum asprosin level and body fat mass [32]. Moreover, increased levels of asprosin may accelerate the reduction in muscle mass in critically ill elderly patients. In intensive care units, providing sufficient nutritional substrates and their use in anabolic processes contributes positively to the course of the disease. We determined a negative correlation between the serum asprosin level of our patients and the percentage of patients reaching the target energy. The present study also found a negative correlation between prealbumin and albumin and the asprosin value on the first day. Prealbumin and albumin values are frequently used as biomarkers of adequate nutrition. However, serum albumin and prealbumin levels are affected by many factors [33,34]. Biochemical measures are useful to obtain in the ICU setting. Nonetheless, improvements in such parameters are not consistently related to improvements in outcomes when controlled for illness severity. There may be several reasons for these limitations [35]. 
The significant fluid shifts in critical illness can affect the serum concentrations of the most commonly used biochemical indicators. Visceral or “hepatic” proteins, including albumin and prealbumin, are affected by the acute phase response, independent of nutrition status or nutrition input [36]. For example, the prealbumin level usually falls at the beginning of the ICU admission even when nutrition support has been fully implemented, and the level may improve as the acute phase response decreases, even if the patient has not received adequate nutrition or has continued to lose weight. However, several studies have suggested that if the acute phase response is reasonably stable, prealbumin levels then correlate with nutrition intake, though perhaps not with outcome. Prealbumin levels cannot be measured at all hospital laboratories [37,38]. Nonetheless, the serum albumin level is routinely measured in most hospitals and is a robust prognostic indicator even in critical illness, but it has a long half-life and does not track significant alterations in nutrition input, making it less useful as a parameter for sequential monitoring of nutrition progress [39]. Malnutrition is a very common and vital issue in intensive care. There are no good markers to assess rapidly developing muscle wasting in patients with fractures. The prealbumin and albumin markers used in routine practice are affected by many parameters. Our study suggests that asprosin, a new adipokine, could be used to monitor nutritional adequacy and muscle loss. However, studies with larger numbers of patients are needed. ## 5. Limitations The limitations of our study are that it is single-centered and the number of patients is small. Our study included only elderly critically ill patients, which limits generalizability to the broader critically ill population. If we had also evaluated insulin resistance in our patients, we could have better explained the pathophysiology. 
Another limitation of our study was the inability to measure inflammatory and anti-inflammatory cytokines in patients due to cost. If we had been able to measure cytokine values, we could have seen the effect of inflammation on nutrition and asprosin more clearly. A longer period of ultrasonographic evaluation would have yielded more precise results on the response of muscle mass to feeding. Mitochondrial evaluation in relation to the asprosin level was not performed in our study; such an evaluation could have provided more reliable results. ## 6. Conclusions Nearly all of the critically ill elderly patients had elevated serum asprosin levels. Serum asprosin levels decreased in patients who received enteral nutritional support and ICU treatment. In this study, there was a negative correlation between the serum asprosin level and delta lean muscle mass. Additionally, the serum asprosin level correlated with nutritional adequacy. Future investigations with larger sample sizes should evaluate whether the serum asprosin level can be used as a biomarker of the adequacy of nutritional interventions in critically ill patients. The relationship between the asprosin level and ICU-acquired weakness should also be clarified.
# Multiple Organic Contaminants Determination Including Multiclass of Pesticides, Polychlorinated Biphenyls, and Brominated Flame Retardants in Portuguese Kiwano Fruits by Gas Chromatography ## Abstract Global production of exotic fruits has grown steadily over the past decade and expanded beyond the originating countries. The consumption of exotic and new fruits, such as kiwano, has increased due to their beneficial properties for human health. However, these fruits are scarcely studied in terms of chemical safety. As there are no studies on the presence of multiple contaminants in kiwano, an optimized analytical method based on QuEChERS was developed and validated for the evaluation of 30 contaminants (18 pesticides, 5 polychlorinated biphenyls (PCB), and 7 brominated flame retardants). Under the optimal conditions, satisfactory extraction efficiency was obtained, with recoveries ranging from $90\%$ to $122\%$, excellent sensitivity, with quantification limits in the range of 0.6 to 7.4 µg kg−1, and good linearity, with coefficients ranging from 0.991 to 0.999. The relative standard deviation in precision studies was less than $15\%$. The assessment of matrix effects showed enhancement for all target compounds. The developed method was applied to samples collected from the Douro Region. PCB 101 was found at a trace concentration (5.1 µg kg−1). The study highlights the relevance of including other organic contaminants, in addition to pesticides, in monitoring studies of food samples. ## 1. Introduction Consumers’ interest in new and exotic fruits has intensified, mainly due to growing knowledge of their bioactive composition and biological activities with pro-healthy effects. Kiwano (*Cucumis metuliferus* E. Mey), belonging to the Cucurbitaceae family, is a plant naturally occurring in South Africa, Nigeria, Namibia, Botswana, and the Southern Sahara, and is also sporadically found in Yemen [1]. 
In recent years, its exportation has grown in countries such as Kenya, New Zealand, France, and Portugal [1,2]. The ripe kiwano fruit is characterized by an orange skin with many blunt thorns on its surface and green, jelly-like flesh inside [1,2,3,4]. Kiwano fruit has low levels of carbohydrates and calories but high contents of water, minerals (including magnesium, calcium, potassium, iron, phosphorus, zinc, and copper), complex B vitamins, vitamin C, and β-carotene [1,2]. Some pharmacological properties of this exotic fruit have recently been reviewed by Vieira et al. [3], including anticardiovascular, antidiabetic, antiulcer, antioxidant, anti-inflammatory, antimalarial, and antiviral activities. Due to these beneficial properties, its production, exportation, and consumption have increased, leading to intensive cultivation. As such, these fruits contribute directly and importantly to food security and nutrition in most producing zones; however, some food safety issues remain little explored in these matrices. There are several ways of improving plant cultivation. One of them is the use of plant protection products, commonly known as pesticides, which may be of chemical as well as natural origin [5]. Pesticides are used to protect crops from the harmful activity of other plants, microorganisms, insects, or even animals [6]. Although higher cultivation yields can be obtained by using pesticides [7], they represent a threat to animal and human health and lives. Other toxic chemical substances present in the environment due to man-made activity, derived from different sources (e.g., plastics, industry), are referred to as environmental pollutants (e.g., polychlorinated biphenyls (PCB), polybrominated diphenyl ethers (PBDE), polycyclic aromatic hydrocarbons (PAH), heavy metals). Many of these compounds can be resistant to environmental degradation and accumulate in soil and food [8]. 
Further, prolonged exposure to these agricultural chemicals, particularly by contaminated food consumption, may lead to chronic disorders, such as cancer, hormone disruption, diabetes, asthma, or infertility [9,10,11] and neurodegenerative disorders [12]. As an example of the pesticide family, organophosphorus pesticides (OPP) are highly toxic chemical compounds used as insecticides for crop protection [13]. These chemicals are neurotoxic, as they inhibit acetylcholinesterase (AChE), which causes malfunctions in muscular activity leading to seizures, paralysis, or even death [14]. Further, persistent organic pollutants (POP), including organochlorine pesticides (OCP), PCB, PBDE, and PAH, are organic lipophilic chemicals that bioaccumulate in fatty tissues, also causing adverse effects on human health and the environment [15,16]. Exposure to POP is associated with malfunctions in the reproductive and endocrine systems [17], being also responsible for the development of many cancer types. Apart from human health, the use of pesticides is deleterious to the environment. Because of this, many flora and fauna species are exposed to multiple contaminants. Water, soil, and air pollution caused by the use of chemicals leads to disturbances in the ecosystem and poses a threat to biodiversity [5,18]. Therefore, their use must be restricted [19]. Due to the toxicity of environmental pollutants, their content needs to be continuously monitored, and attention to them is crucial. Besides that, surveys of pesticide residues in fruit are important to validate conformity with strict regulations of newly open markets for the exportation of exotic fruit. The European Commission establishes the maximum residue levels (MRLs) for pesticides to minimize the exposure of humans to harmful levels in food or feed [20]. Pesticides and several environmental pollutants have been reported in the literature on food [21,22,23,24,25,26,27,28]. 
However, studies are lacking for newly marketed fruits that are not yet covered by legislation despite high demand, and environmental contaminants in these fruits are likewise unregulated [29,30,31]. Moreover, one of the ambitious goals set by the European Green Deal and the Farm to Fork Strategy is a $50\%$ reduction in the use of pesticides by 2030. This poses a challenge to analytical chemistry, namely in the development and validation of sensitive analytical methods. One of the best approaches for multiresidue analysis (pesticides and other contaminants simultaneously) in food samples is extraction by the Quick, Easy, Cheap, Effective, Rugged, and Safe (QuEChERS) method [32]. It is a very convenient, time- and reagent-saving, solid-phase extraction-based procedure consisting of two major steps [33]. In the first step, the fruit, vegetable, or other food sample is subjected to extraction with acetonitrile (MeCN) and salts (e.g., MgSO4, NaCl), followed by a second step in which a sample clean-up via dispersive solid-phase extraction (d-SPE) is performed [20]. Afterward, the extracted and purified compounds are commonly analyzed by gas chromatography (GC)-based methods [34]. In particular, GC coupled with a mass spectrometer (MS) is favored for identifying such complex mixtures of multiple contaminants due to its low limits of detection (LOD) [35]. Tandem mass spectrometry, specifically GC-MS/MS and LC-MS/MS, and other selective detectors have been reported to be more efficient in simultaneously detecting multiple contaminants [36]. Considering the beneficial properties associated with kiwano and its increasing consumption, it becomes urgent to develop methodologies to evaluate this fruit's safety [37].
To the best of our knowledge, there are no analytical methods developed or monitoring studies that report the chemical safety, in terms of pesticides and other environmental contaminants (namely plastic-related chemicals and others associated with anthropogenic sources), of kiwano fruit samples. Therefore, the aim of this study was to optimize and validate an extraction methodology for the simultaneous analysis of 30 multiple contaminants (6 OPP, 12 OCP, 5 PCB, and 7 BFR) in kiwano fruit samples using the QuEChERS method with d-SPE clean-up, detecting trace levels of these contaminants by GC techniques.

## 2.1. Reagents and Standards

Analytical standards of high purity (≥$97\%$) for seven brominated flame retardant (BFR) compounds (2,4,4′-tribromodiphenyl ether (BDE28), 2,2′,4,4′-tetrabromodiphenyl ether (BDE47), 2,2′,4,4′,5-pentabromodiphenyl ether (BDE99), 2,2′,4,4′,6-pentabromodiphenyl ether (BDE100), 2,2′,4,4′,5,5′-hexabromodiphenyl ether (BDE153), 2,2′,4,4′,5,6′-hexabromodiphenyl ether (BDE154), and 2,2′,3,4,4′,5′,6-heptabromodiphenyl ether (BDE183)) were obtained from Isostandards Material, S.L. (Madrid, Spain). The five PCB standards (2,4,4′-trichlorobiphenyl (PCB28), 2,2′,4,5,5′-pentachlorobiphenyl (PCB101), 2,3′,4,4′,5-pentachlorobiphenyl (PCB118), 2,2′,4,4′,5,5′-hexachlorobiphenyl (PCB153), and 2,2′,3,4,4′,5,5′-heptachlorobiphenyl (PCB180)) were acquired from Riedel-de Haën (Seelze, Germany). The eighteen analytical-grade pesticides (12 OCP (hexachlorobenzene (HCB), α-, β-, and ζ-hexachlorocyclohexane (HCH), 1,1,1-trichloro-2-(2-chlorophenyl)-2-(4-chlorophenyl)ethane (o,p′-DDT), 2,2-bis(4-chlorophenyl)-1,1-dichloroethylene (p,p′-DDE), 1-chloro-4-[2,2-dichloro-1-(4-chlorophenyl)ethyl]benzene (p,p′-DDD), aldrin, dieldrin, α-endosulfan, methoxychlor, and lindane) and 6 OPP (chlorfenvinphos, chlorpyrifos, chlorpyrifos-methyl, dimethoate, parathion-methyl, and malathion)) were obtained from Sigma-Aldrich (St. Louis, MO, USA).
The internal standards (IS) 4,4′-dichlorobenzophenone and triphenyl phosphate were from Sigma-Aldrich (St. Louis, MO, USA). QuEChERS extraction kits, clean-ups, and SampliQ GCB (graphitized carbon black) SPE bulk sorbent were from Agilent Technologies (Santa Clara, CA, USA). Chromatography-grade n-hexane and acetonitrile (MeCN) were purchased from Merck (Darmstadt, Germany) and Carlo Erba (Val de Reuil, France), respectively. Ultrapure water (UPW) with resistivity >18.2 MΩ·cm at 25 °C was produced with a Milli-Q water purification system (Millipore, MA, USA).

## 2.2. Samples

Ten kiwano fruits were supplied by a local farm located at Cinfães, Douro, Portugal. The mature fruits were collected in February 2019 from 10 different plants (random sampling) to obtain a representative set of fruits. The pulp of the kiwano was separated from the orange skin, ground in a miller, homogenized, and finally stored at −18 °C.

## 2.3. Extraction Procedure: Optimization and Validation

The 30 multiple contaminants were extracted from the kiwano samples based on a previously reported QuEChERS method with d-SPE clean-up [22].
The procedure, whose schematic illustration is shown in Figure 1, included five steps: (1) 5 g of kiwano pulp sample was weighed into a 50 mL polypropylene tube; (2) 8 mL of MeCN and 2 mL of UPW were added and the tube was thoroughly vortexed for 1 min; EN QuEChERS salts (4 g MgSO4, 1 g NaCl, 1 g sodium citrate, 0.5 g disodium citrate sesquihydrate) were then added, and the tube was shaken for 1 min with a vortex and centrifuged for 5 min at 2490 rcf at room temperature; (3) 1 mL of the supernatant was transferred to a 2 mL d-SPE clean-up tube (150 mg of MgSO4, 50 mg of PSA, and 25 mg of GCB), and the tube was vortexed for 1 min and centrifuged for 5 min at 2490 rcf at room temperature; (4) 900 µL of the final extract was transferred to a labelled vial, dried under nitrogen flow, and redissolved in 900 µL of n-hexane; and finally, (5) the sample was vortexed, and 150 µL of the extract with the addition of 100 µg L−1 of the IS was placed in a vial in the autosampler for gas chromatography (GC) analysis. The IS was used to control the analytical quality of the GC analysis. Extractions were performed in triplicate. For the optimization of the methodology, pre-spiking and post-spiking experiments were carried out to evaluate the extraction efficiency. The procedure for pre-spiking was the same as described above (Figure 1), except that the sample in step 1 was spiked with 7.5 µg kg−1 of the mixture of 30 multiple contaminants; the following steps remained the same. The procedure for post-spiking differed in step 4: before injection into the GC, 7.5 µg kg−1 of the 30 multiple contaminants was added to the vial and redissolved in the kiwano fruit extract. The extraction efficiency was assessed in terms of recovery percentages, comparing the results obtained in the pre-spiking and post-spiking studies.
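The extraction-efficiency calculation described above reduces to a ratio of mean responses; a minimal sketch, using hypothetical triplicate peak areas rather than any data from this study:

```python
# Hypothetical peak areas for one analyte spiked at 7.5 ug/kg before
# extraction (pre-spike) and into the final extract (post-spike).
# These values are illustrative placeholders only.
pre_spike_areas = [1480, 1520, 1455]
post_spike_areas = [1600, 1580, 1620]

def recovery_percent(pre, post):
    """Extraction recovery: mean response of the analyte carried through
    the whole procedure, relative to the mean response of the analyte
    added only to the final extract."""
    return 100.0 * (sum(pre) / len(pre)) / (sum(post) / len(post))

recovery = recovery_percent(pre_spike_areas, post_spike_areas)
print(f"recovery = {recovery:.1f}%")  # falls within the 70-120% acceptance range
```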
The validation of the developed method was performed following the Eurachem guidelines and the SANTE/11312/2021 document by studying several analytical parameters: linearity; recovery at three spiking levels (7.5, 11.2, and 14.9 µg kg−1) with 5 replicates; matrix effects; and intra-day and inter-day precision (experiments at the 7.5 µg kg−1 spiking level with five repeated measurements on the same day and on separate days). Quantification was performed using matrix-matched calibration (linearity between 1.5–18.7 µg kg−1) and solvent calibration (linearity between 10–125 µg L−1). The analytical validation was performed on a GC coupled to an electron capture detector (GC-ECD) and a GC coupled to a flame photometric detector (GC-FPD); by regression analysis, the linearity was evaluated and the limits of detection and quantification (LOD and LOQ) were determined.

## 2.4. Equipment

The GC analysis was performed according to Dorosh et al. [22]. Briefly, the halogenated organic compounds (5 PCB, 7 BFR, and 12 OCP) were analysed using GC-ECD (GC-2010, Shimadzu, Kyoto, Japan) and the OPP using GC-FPD (GC-2010, Shimadzu, Kyoto, Japan). The presence of contaminants was confirmed by GC/MS. Confirmation was based on a comparison of sample GC retention times and product ion abundance ratios (mass-to-charge ratio, m/z) against those obtained for a reference standard. System control and data acquisition were performed in Shimadzu's GC Solution software for GC-ECD and GC-FPD and in Xcalibur software for GC/MS. The GC analysis was performed in triplicate.

## 2.4.1. GC-ECD

The analysis was performed using a capillary GC column Zebron-5MS (30 m × 0.25 mm × 0.25 μm) (Phenomenex, Madrid, Spain). The oven temperature was programmed at 40 °C for 1 min, then increased to 120 °C at a rate of 15 °C/min, where it was kept for 1 min.
Then, the temperature was increased once more at a rate of 10 °C/min to 200 °C, where it was kept for 1 min, and lastly, the temperature was increased at 7 °C/min to 290 °C and held for 10 min. The injection was performed in splitless mode. The temperatures of the injector and the ECD were 250 °C and 300 °C, respectively. Helium was used as the carrier gas (1.3 mL/min) and nitrogen as the makeup gas (30 mL/min).

## 2.4.2. GC-FPD

The GC-FPD column was the same as the one described in Section 2.4.1. The carrier gas was helium at 1 mL/min with a linear velocity of 25.4 cm s−1. The injector was kept at 250 °C, the injection was performed in splitless mode, and the detector was kept at 290 °C. The column was programmed at 100 °C, which was kept for 1 min before increasing to 150 °C at a rate of 20 °C/min, where it was held for 1 min. Next, the temperature was increased to 180 °C at 2 °C/min and kept for 2 min, and finally increased at 20 °C/min to 270 °C, where it was kept for 1 min.

## 2.4.3. GC/MS Analysis

According to the SANTE guidelines, confirmation of samples should be performed with an MS detector. GC/MS analysis was therefore performed, under conditions similar to those of GC-ECD, only on the samples found positive by GC-ECD, in order to confirm the results. A TRACE GC Ultra gas chromatograph (Thermo Fisher Scientific, Austin, TX, USA) coupled with a Polaris Q ion trap mass spectrometer was used. The transfer line and the ion source temperatures were 260 and 270 °C, respectively. Data acquisition was performed first in full scanning mode from 50 to 500 m/z to confirm the retention times of the analytes. All standards and sample extracts were then analyzed in selective ion monitoring (SIM) mode. PCB101 confirmation was performed with the identification of three m/z ions: 326 > 324 > 286.

## 2.5. Statistical Analysis

Two-way ANOVA statistical analysis was applied to estimate significant differences among the different analytical procedures using GraphPad software.
Multiple comparisons were performed in which each mean value was compared within each group of contaminants.

## 3. Results and Discussion

The extraction and clean-up steps for the kiwano matrix were a challenging part of the method development due to its rich composition of carotenoids, steroids, alkaloids, saponins, glycosides, flavonoids, tannins, and phenolic compounds [1,3]. The optimization of the analytical method for the determination of the 30 contaminants in kiwano samples addressed the two crucial steps of the QuEChERS procedure: (1) sample extraction and (2) the d-SPE clean-up. Figure 2 shows the chromatograms obtained when the mixture of the 30 multiple contaminants was analyzed by GC-ECD and GC-FPD using the methods described in Section 2.4.1 and Section 2.4.2. The extraction recovery of the method was evaluated by spiking the kiwano sample with the multiple contaminant solutions at 7.5 µg kg−1. Four protocols were tested: (1) QuEChERS AOAC with additional d-SPE clean-up CL1 (150 mg of MgSO4, 50 mg of PSA, and 50 mg of GCB); (2) QuEChERS AOAC with additional d-SPE clean-up CL2 (150 mg of MgSO4, 50 mg of PSA, and 25 mg of GCB); (3) QuEChERS EN with additional d-SPE clean-up CL1; and (4) QuEChERS EN with additional d-SPE clean-up CL2. The method's efficiency was evaluated according to the guidelines of the SANTE document [38], with the acceptable recovery range established as 70 to $120\%$. As shown in Figure 3, poor extraction recoveries were observed for some of the chemical families using QuEChERS AOAC. The OCP, PCB, and BFR compounds presented recoveries of less than $70\%$ using QuEChERS AOAC and CL1, while with QuEChERS AOAC and CL2 only the PCB compounds fell below this threshold. Since the recovery percentages after the clean-up CL1 (150 mg of MgSO4, 50 mg of PSA, and 50 mg of GCB) with QuEChERS AOAC were not satisfactory, the approach of testing another QuEChERS composition (EN) and another d-SPE clean-up (CL2) was followed.
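The two-way ANOVA comparison of mean recoveries (clean-up × chemical family, Section 2.5) can be reproduced outside GraphPad; a sketch using numpy only, with hypothetical recovery values chosen purely for illustration:

```python
import numpy as np

# Hypothetical recovery values (%, triplicate) per clean-up x family
# cell; these are illustrative placeholders, not the study's data.
data = {
    ("CL1", "OCP"): [62, 65, 60], ("CL1", "OPP"): [85, 88, 90],
    ("CL1", "PCB"): [55, 58, 57], ("CL1", "BFR"): [63, 66, 61],
    ("CL2", "OCP"): [92, 95, 90], ("CL2", "OPP"): [96, 99, 101],
    ("CL2", "PCB"): [88, 85, 90], ("CL2", "BFR"): [94, 97, 92],
}
cleanups = ["CL1", "CL2"]
families = ["OCP", "OPP", "PCB", "BFR"]
n = 3  # replicates per cell

y = np.array([data[(a, b)] for a in cleanups for b in families], float)
grand = y.mean()
cell = y.mean(axis=1).reshape(len(cleanups), len(families))
row = cell.mean(axis=1)  # clean-up (factor A) means
# between-groups sum of squares for the clean-up factor
ss_cleanup = n * len(families) * ((row - grand) ** 2).sum()
# within-cell (error) sum of squares
ss_error = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum()
df_cleanup = len(cleanups) - 1
df_error = len(cleanups) * len(families) * (n - 1)
f_cleanup = (ss_cleanup / df_cleanup) / (ss_error / df_error)
# F(1, 16) critical value at alpha = 0.05 is about 4.49
print(f"clean-up main effect: F = {f_cleanup:.1f}")
```

With these placeholder recoveries the F statistic far exceeds the 4.49 critical value, mirroring a significant clean-up effect.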
After reducing the GCB in the CL2 clean-up and using QuEChERS EN, an improvement in extraction recoveries for all targeted compounds was observed. The most evident result on extraction efficiency is the negative influence of the amount of GCB used in the second step of the extraction. As previously reported, GCB adsorbs compounds such as pigments, anthocyanins, and carotenoids, as well as planar compounds [23,33]. Therefore, reducing its quantity in the clean-up step is one of the optimizations of this process. Although the lower amount of GCB did not adsorb all the coloring compounds as the previous CL1 clean-up did, the samples were still suitable for GC analysis. ANOVA statistical analysis was used to compare the mean recoveries of each clean-up test (CL1, CL2) between the target chemical groups (OCP, OPP, PCB, BFR). The two-way ANOVA showed that the recoveries were significantly different between the two clean-up sets (CL1 and CL2) for OCP and BFR using QuEChERS AOAC, while for QuEChERS EN all chemical groups were statistically different. Overall, the results showed that most of the compounds are in the 70–$120\%$ range when QuEChERS EN and CL2 are used. Figure 4 shows a summary of the results of the recovery studies. The highest number of contaminants within the satisfactory 70–$120\%$ range was achieved with QuEChERS EN and CL2. As previously reported, a detailed optimization is an extremely important step, as it reveals which conditions give the best results. As reported by Fernandes et al. [22,23,24,35], this extraction method is suitable but needs to be optimized and studied for each group of compounds and matrices. The results, displayed in Figure 3 and Figure 4, allowed us to conclude that the best extraction and cleaning procedures for kiwano were QuEChERS EN with the CL2 clean-up (150 mg of MgSO4, 50 mg of PSA, and 25 mg of GCB), and this combination was selected for all further investigations.

## 3.1. Matrix Effects
In the present work, the matrix effect was evaluated by comparing the slopes of the calibration curves of each compound in the matrix phase and in n-hexane. This evaluation was complemented by comparing the retention times of chromatograms at the same concentration in the matrix phase and in n-hexane, and no significant differences were observed. It is well described in the literature that some analytes in fruit extracts exhibit a matrix signal enhancement/suppression effect when analyzed by GC [23,39]. This effect occurs when interferences from fruit matrices (such as pigments, lipids, acids, etc.) compete with the target analytes in the GC injector [40]. Figure 5 shows that the different chemical families (OCP, OPP, PCB, and BFR) analyzed in kiwano fruits presented different matrix effect behaviors. Signal enhancement was observed with the use of both QuEChERS AOAC and EN with the CL2 clean-up step. Additionally, with QuEChERS AOAC and the CL2 clean-up, the mean matrix factor value was higher than 1.2 for all the chemical families. The BFR showed the highest signal increase. QuEChERS EN showed a satisfactory matrix factor with the CL1 clean-up; however, as shown in Section 3, the extraction efficiency was not acceptable with this extraction procedure. In any case, this study confirmed that the matrix effect was more evident when the lowest amount of GCB sorbent was used.

## 3.2. Method Validation

Method validation is an essential requirement in the development of an analytical method. The reliability and robustness of the method to be used for real sample analysis should be studied considering several analytical parameters.
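One such parameter, the regression-based LOD/LOQ estimate (3.3 and 10 times the residual standard deviation of the response divided by the calibration slope), can be sketched as follows; the calibration points are hypothetical, chosen only to span a matrix-matched range like 1.5–18.7 µg kg−1, and are not data from this study:

```python
import numpy as np

# Hypothetical matrix-matched calibration for one analyte:
# spiked concentrations (ug/kg) vs. detector peak areas (a.u.).
conc = np.array([1.5, 3.7, 7.5, 11.2, 14.9, 18.7])
area = np.array([310.0, 745.0, 1505.0, 2260.0, 2990.0, 3760.0])

slope, intercept = np.polyfit(conc, area, 1)
resid = area - (slope * conc + intercept)
# residual standard deviation of the response (n - 2 degrees of freedom)
s_y = np.sqrt((resid ** 2).sum() / (len(conc) - 2))
lod = 3.3 * s_y / slope    # limit of detection, ug/kg
loq = 10.0 * s_y / slope   # limit of quantification, ug/kg
r2 = 1 - (resid ** 2).sum() / ((area - area.mean()) ** 2).sum()
print(f"slope = {slope:.1f}, r2 = {r2:.4f}, LOD = {lod:.2f}, LOQ = {loq:.2f} ug/kg")
```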
Linearity, extraction recovery at three spiking levels (7.5, 11.2, and 14.9 µg kg−1), precision, LODs and LOQs obtained by regression analysis (based on the standard deviation of the response and the slope of the calibration curve), as well as matrix effects, were the parameters studied for the validation of the analysis of multiple contaminants in kiwano samples. Table 1 summarizes the analytical parameters, in order of retention time, obtained by GC-ECD and GC-FPD. Considering the matrix effects described in the previous section, the analytical validation process was carried out in kiwano extract. Matrix-matched calibration curves were obtained in kiwano extracts for the 30 target analytes, with coefficients of determination greater than 0.991. LODs and LOQs ranged from 0.2 to 2.2 and from 0.6 to 7.4 µg kg−1, respectively (Table 1). The mean recoveries at the three spiking levels of 7.5, 11.2, and 14.9 µg kg−1 ranged from $90\%$ to $122\%$ ($99\%$ on average), with relative standard deviation (RSD) values between $8\%$ and $15\%$. The method precision was determined through intra-day and inter-day repeatability experiments with five repeated measurements, and the results were less than $15\%$ RSD, which is suggested as the acceptable precision (Table 1). When compared to other studies on exotic fruits [30], the analytical parameters for organochlorine pesticides, namely the LOD and LOQ, are much better in the present work. As for the BFR, a previously reported study in capsicum cultivars [23] presented higher LOD and LOQ values than those obtained here for kiwano. Although the European Union legislation for pesticides [41] does not include the kiwano fruit, the analytical parameters obtained for this method meet the requirements. As for the other studied compounds, most of them are not included in food legislation, despite being frequently detected in food products.
As an example, EFSA recommends BFR monitoring studies in food samples [42].

## 3.3. Kiwano Sample Analysis

After the method validation, the optimized method was applied to evaluate possible contamination in kiwano samples. Since the study was carried out on the kiwano pulp, as it is the edible part, the results are presented per pulp mass. The screening of the 30 multiple contaminants in a total of 10 kiwano samples led to the identification and quantification of PCB 101 (5.1 µg kg−1 in the kiwano pulp) in a single sample. GC/MS analysis confirmed the presence of PCB 101 (Figure 6). It was thereby confirmed that, except for one sample, the kiwano fruit samples are safe in terms of the 12 OCP, 6 OPP, 7 BFR, and 5 PCB studied. The presence of pesticides in fruits is well reported in the literature [28,43,44,45]; concerning other contaminants, the works are less represented. However, PCBs, mostly associated with anthropogenic sources, have been reported in grapes and several other fruits [46,47], and BFR in red fruits [24] and capsicum cultivars [23], among others [48]. This work was performed on a small number of samples, and Portugal is still at an early stage with this crop. However, it shows the great importance of including these fruits in monitoring studies, which should be extended to a larger number of samples from different production sites. Furthermore, the results suggest the importance of including other organic contaminants, in addition to pesticides, in monitoring studies on food samples.

## 4. Conclusions

An analytical methodology based on an optimized QuEChERS technique was effectively applied for the simultaneous analysis of 30 multiple contaminants (12 OCP, 6 OPP, 5 PCB, and 7 BFR) in kiwano samples. The optimized QuEChERS procedure encompassed the study of two QuEChERS compositions (QuEChERS AOAC and EN) in addition to two d-SPE clean-up compositions (CL1 and CL2).
Although matrix effects were observed, it was found that QuEChERS EN, in combination with the CL2 clean-up, offered an improvement in the overall extraction recovery of the multiple target contaminants. Based on these results, it can be concluded that analytical method optimization studies are crucial for the analysis of multiple compounds in complex matrices. The methodology meets the analytical requirements in terms of accuracy, sensitivity, and precision. This novel method allows the evaluation of multiple contaminants in kiwano samples, supporting their safe commercialization with respect to the presence of pesticides and other organic contaminants. The presence of PCB 101 in one kiwano fruit reinforces the need for monitoring studies of organic contaminants, such as PCBs and BFRs, in this fruit.
# Identification of a Link between Suspected Metabolic Syndrome and Cognitive Impairment within Pharmaceutical Care in Adults over 75 Years of Age

## Abstract

The prevalence of metabolic syndrome (MetS) and cognitive impairment (CI) increases with age. MetS reduces overall cognition, and CI predicts an increased risk of drug-related problems. We investigated the impact of suspected MetS (sMetS) on cognition in an aging population receiving pharmaceutical care at different stages of old age (60–74 vs. 75+ years). The presence or absence of sMetS (sMetS+ or sMetS−) was assessed according to criteria modified for the European population. A Montreal Cognitive Assessment (MoCA) score of ≤24 points was used to identify CI. We found a lower MoCA score (18.4 ± 6.0) and a higher rate of CI ($85\%$) in the 75+ group when compared to younger old subjects (23.6 ± 4.3; $51\%$; $p \leq 0.001$). In the 75+ age group, a higher occurrence of MoCA ≤ 24 points was found in sMetS+ ($97\%$) as compared to sMetS− ($80\%$; $p \leq 0.05$). In the age group of 60–74 years, a MoCA score of ≤24 points was identified in $63\%$ of sMetS+ when compared to $49\%$ of sMetS− (NS). In conclusion, we found a higher prevalence of sMetS, a higher number of sMetS components, and lower cognitive performance in subjects aged 75+. Age of 75+, the occurrence of sMetS, and lower education can predict CI.

## 1. Introduction

The prevalence of both metabolic syndrome (MetS) and cognitive impairment (CI) increases with age [1,2]. According to the international classification of MetS [3], the prevalence of MetS ranges from $37\%$ up to $60\%$ in the elderly population [4,5,6]. Although cognitive impairment and dementia are often age-related disorders and, according to the World Health Organisation, affect approximately 20–$25\%$ of the older population, they are not part of normal ageing [7].
In 2019, over 55 million people worldwide already suffered from cognitive disorders, AD, or dementia, and this number is expected to almost double every 20 years, reaching 78 million in 2030 and 139 million in 2050 [7]. In general, MetS impairs overall intellectual functioning [1,2], and CI is the most significant factor in therapy failure in chronic disorders [8], mainly in older adults [9]. The presence of MetS, according to the classification of the International Diabetes Federation 2006 for the European population [3], can also be routinely evaluated within pharmaceutical care in a community pharmacy. For the assessment of CI, the Montreal Cognitive Assessment (MoCA) can be used as a simple, easy-to-use, yet reliable cognitive screening tool [10,11] with high sensitivity for mild cognitive impairment [12]. Community pharmacists are the most accessible and most frequently contacted healthcare professionals worldwide [13,14], who may play a crucial role in the identification of individuals with chronic disorders [15,16,17], including those suffering from cognitive disorders [18,19], provided that the pharmacist is trained in screening for this type of disorder. Nowadays, common pharmaceutical care (the preparation, storage, and dispensation of medicines, the provision of expert advice on their correct and safe use, or advice on non-pharmacological regimen measures) is being expanded globally by additional professional competences of pharmacists, for example, the monitoring of biochemical parameters, blood pressure measurement, management of obesity, and smoking cessation. These competences are gradually becoming part of a more patient-oriented pharmaceutical care worldwide, defined as expanded pharmaceutical care. Pharmaceutical care provided in nursing homes or senior care centres brings additional benefits to older adults [20,21].
The identification of potentially preventable risk factors (such as MetS and/or its components) and/or early stages of serious illnesses (e.g., cognitive impairment and dementia) within pharmaceutical care might help in slowing the rate of their progression and further disability [8,14,22]. Assessment of cognitive functions in elderly patients with MetS components is critical, but due to lack of time, it is routinely performed by only $24\%$ of general practitioners, although $82\%$ believe screening is needed [23]. Thus, the extension of pharmaceutical care toward cognitive screening might provide significant benefits for patients and the healthcare system. The association between MetS and CI appears to be age-dependent [24,25]. The presence and onset of cardiovascular risk factors for CI are crucial for the vascular modifications that result in reduced cerebral blood flow and metabolism in the brain [26]. While the younger old (60–74 years) may be more susceptible to the cardiovascular load imposed by MetS on the central neural pathways regulating mental processes [25], MetS might, on the other hand, have a positive influence on health status in older old (75+) individuals [27,28]. In our pilot study [10,11], we focused on the risk of suspected MetS (sMetS), estimated while a pharmacist provided a healthcare service, and the related CI in the elderly; we showed the feasibility of cognitive testing in pharmaceutical care and its potential for identifying sMetS subjects affected by CI, but we did not investigate the impact of MetS in different age groups of elderly patients. We concluded that a quick and simple cognitive assessment could be a helpful extension of pharmaceutical care [10,11].
Our previous findings showed that: (i) $56\%$ of a random population over 60 years of age exhibited lower cognitive performance on the MoCA; (ii) subnormal MoCA scores were significantly more frequent with increasing age of the respondents; and (iii) the presence of MetS correlated moderately but significantly and negatively with the MoCA score [10]. Currently, in the same cohort as previously [10,11], we aimed to investigate whether sMetS has different effects on cognition in “younger old” (60–74 years) and “older old” (aged 75 years and over) individuals. Recent research reports diverse findings [1,22,24,29,30,31]. While MetS contributes to cognitive decline in “younger old” subjects [22,31], there is evidence that this effect may be weakened or absent in 75+ individuals [24,29,30]. More detailed studies of the relationship between MetS and CI in the elderly population before the age of 75 and at the age of 75+ could have a global benefit [32], but further studies are needed. In this study, we aimed to investigate the impact of sMetS on cognition in aging individuals, with respect to the age category of 75+ years. Subjects were provided with pharmaceutical counselling, i.e., a specific patient-oriented pharmaceutical care service in community pharmacies targeted at the identification of the components of MetS and of MetS itself (according to the IDF classification), including screening of the cognitive features of the enrolled older patients. We hypothesized that sMetS estimated within pharmaceutical care has a different influence on cognitive performance in the younger elderly population aged 60–74 years and in the 75+ population. We expected that younger old sMetS+ individuals would achieve significantly worse cognitive performance compared to the same age group without sMetS.
On the other hand, we expected that the cognitive performance of sMetS+ and sMetS− older old individuals would either show no difference or be only slightly weaker in the sMetS+ group than in the sMetS− group.

## 2.1. Study Settings, Design and Sample Size

Here, we used data from a randomized pilot study in Slovakia [10,11], in which 323 subjects were enrolled. Among them, 222 voluntary participants were interviewed in 16 community pharmacies, and 101 participants from 3 senior care centres aged 60 years and over were included; $63\%$ were in the 60–74 years group and $37\%$ in the 75+ group (the age of the oldest participant was 95 years).

## 2.2. Study Participants and Selection

The participants ($68\%$ women, $32\%$ men) were individuals who visited a community pharmacy or lived in a senior care centre in Slovakia (between February 2018 and February 2019) and who were willing to provide their general input data (socio-demographic information) and the list of all chronically used medications with the codes for their chronic diseases. Participants were randomly selected on the basis of their voluntary consent and their physical and mental ability to undergo screening. All respondents completed a simple data collection form in the Slovak language comprising socio-demographic information (age, gender, education level), smoking and physical activity habits, and the presence or absence of abdominal obesity, mediated by a pharmacist. The basic characteristics of the cohort sample are displayed in Table 1. Subsequently, cognitive screening with the MoCA test was performed by trained pharmacists. Exclusion criteria were severe physical or mental health conditions that interfered with the realization of the cognitive screening test and/or an incompletely filled data collection form. We excluded 42 incompletely filled data collection forms.
The forms were collected over one year (February 2018–February 2019), and the study was approved by the Ethics Committee of the Faculty of Pharmacy, Comenius University in Bratislava (EK FaF UK 01/2018). All procedures followed the relevant guidelines and regulations under the Declaration of Helsinki.

## 2.3. Classification of MetS and Assessment of Cognitive Function

According to the provided codes for patients' chronic diseases and the information on the presence/absence of abdominal obesity, the individual components of MetS were identified. Suspected metabolic syndrome (sMetS) was assessed according to the International Diabetes Federation Worldwide Definition of MetS, 2005, modified for the European population [3]. Accordingly, patients were divided with respect to the presence (sMetS+) or absence (sMetS−) of suspected MetS. The Montreal Cognitive Assessment is one of the available cognitive screening instruments, scanning seven cognitive domains: executive functioning; visuospatial abilities; language; attention, concentration and working memory; abstract reasoning; memory; and orientation. The Slovak version of the Montreal Cognitive Assessment (MoCA) [8], with a reduced cut-off of ≤24 points for cognitive impairment by Bartos and Fayette [33], was used by pharmacists who were trained in the MoCA screening tool. Administration time was approximately 15 min, and participants achieved a score between 0 and 30 points.

## 2.4. Statistical Analysis

Data were analysed using the SAS Education Analytical Suite for Microsoft Windows, version 9.3 (Copyright © 2012 SAS Institute Inc., Cary, NC, USA). The continuous demographic and clinical variables of the study groups (e.g., age, the MoCA score) were represented by the arithmetic mean, standard deviation, or $95\%$ confidence interval. Categorical descriptive variables (e.g., sMetS status, MoCA status) were characterized by absolute frequencies and percentages.
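The descriptive summaries just described (mean ± SD, plus frequencies of MoCA status at the ≤24-point cut-off) can be sketched in a few lines; the scores below are hypothetical, for illustration only:

```python
# Hypothetical MoCA scores (0-30) for a small illustrative sample;
# the cut-off of <= 24 points from Section 2.3 flags lower cognition.
scores = [28, 21, 24, 17, 26, 19, 23, 30, 15, 22]

n = len(scores)
mean = sum(scores) / n
sd = (sum((s - mean) ** 2 for s in scores) / (n - 1)) ** 0.5  # sample SD
impaired = sum(1 for s in scores if s <= 24)  # absolute frequency
pct = 100.0 * impaired / n                    # percentage
print(f"MoCA {mean:.1f} +/- {sd:.1f}; lower performance: {impaired}/{n} ({pct:.0f}%)")
```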
When comparing two groups with continuous data, a two-sample t-test was used. In addition, Pearson's Chi-Square test and Fisher's exact test of cross-tabulated data were performed to analyse the association between the frequencies of categorical variables. The 0.05 significance level was used as the threshold for statistical significance for all tests, and 0.8 was taken as the minimally acceptable power of the tests. Exogenous variables, i.e., variables beyond the studied metabolic symptoms and cognitive function, may have a significant impact on the validity of the measurement. We investigated these terms by standard procedures of regression diagnostics, and control procedures such as sample randomization and matching were applied; finally, the ANOVA method was used as a statistical control to reduce the possible effect of extraneous variables. We used random allocation, a technique that minimizes confounders and eliminates systematic bias by allocating individuals to treatment and control groups solely by chance. We chose this method for its simplicity and effectiveness in eliminating distortion. Due to the pilot nature of the study, we did not perform an exact a priori calculation of the number of participants according to the case-control methodology. However, the power of the performed tests was controlled by appropriate post hoc calculations. We also devised a simple predictive model to forecast the impact of patients' age, sMetS status, and education level on cognitive performance in the MoCA test. As exclusive predictors in this model, age (dichotomic groups 60–74 years vs. 75+), sMetS status (sMetS+/sMetS−) or MetS components (central obesity, high blood pressure, dyslipidaemias, diabetes mellitus type 2), and education level (dichotomic groups “lower education” for 12 years and less vs. “higher education” for 13 years and more) were used. The calculated output data were the MoCA status (MoCA normal/MoCA lower cognitive performance).
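The cross-tabulation tests named above (Pearson's chi-square and Fisher's exact test), together with a Wald confidence interval for an odds ratio, can be sketched with scipy; the 2×2 counts are hypothetical, chosen only to mirror the reported 97% vs. 80% rates of MoCA ≤ 24 in the 75+ group, not the study's actual data:

```python
import math
from scipy import stats

# Hypothetical 2x2 table for the 75+ group:
# rows = sMetS+ / sMetS-, columns = MoCA <= 24 (CI) / MoCA > 24.
table = [[33, 1],    # sMetS+: 33/34 ~ 97% with lower MoCA
         [68, 17]]   # sMetS-: 68/85 = 80% with lower MoCA

chi2, p_chi, dof, expected = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)

# Wald 95% confidence interval for the sample odds ratio ad/bc.
(a, b), (c, d) = table
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"Fisher p = {p_fisher:.3f}, OR = {odds_ratio:.2f} "
      f"(95% CI {ci_low:.2f}-{ci_high:.2f})")
```

Fisher's exact test is the usual fallback here because one cell count is small, where the chi-square approximation becomes unreliable.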
The success score of the prediction model was expressed by the evaluation of the confusion matrix in percentage. ## 3.1. Prevalence of sMetS and Cognitive Impairment The prevalence of sMetS in the study cohort was $18.5\%$ in participants aged 60–74 years and $27\%$ in those aged 75+ (NS). On average, individuals aged 75+ achieved a significantly lower MoCA score (18.4 ± 6.0) than patients aged 60–74 (23.6 ± 4.3). Lower cognitive performance (MoCA score ≤24) was more frequent in participants aged 75+ ($85\%$) than in those aged 60–74 years ($51\%$; $p \leq 0.001$). In both subcohorts (60–74 years vs. 75+), age had a significant influence on cognitive performance ($p \leq 0.05$ vs. $p \leq 0.001$, respectively). ## 3.2. Occurrence of sMetS and Patients’ Cognitive Performance sMetS influenced the MoCA score in 75+ seniors (see Figure 1): we found a significantly higher occurrence of lower cognitive performance in MoCA in 75+ participants with sMetS ($97\%$) when compared to the 75+ sMetS− group ($80\%$; $p \leq 0.05$; r2 = 0.063); the difference in mean MoCA score was −1.99 points (NS). In contrast, the MoCA score in younger seniors was unaffected by the presence of sMetS. In participants aged 60–74 years, the prevalence of lower cognitive performance according to MoCA was $63\%$ in the sMetS+ group and $49\%$ in the sMetS− group (NS; the difference in mean MoCA score was −1.21 points, NS). ## 3.3. Number of MetS Components and Patients’ Cognitive Performance Individuals aged 75+ had a significantly higher number of MetS components (2.2 ± 0.9) than 60–74-year-old participants (1.6 ± 1.1; $p \leq 0.001$). In both age groups, however, the number of MetS components was not associated with patients’ cognitive performance in MoCA. ## 3.4. Association between a MetS Status, Age, Education Level and Cognitive Performance We proposed here a simple predictive model (see Figure 2) using three input categorical components, such as an occurrence or absence of sMetS and affiliation with a given age group (60–74 vs.
over 75 years) and the observed output data expressed by the cognitive performance group (below or above the norm), with a classification success rate of $73\%$ ($p \leq 0.001$). The odds ratio for the age group 75+ against the younger group was 5.54; CI $95\%$ = 3.24–9.83 ($p \leq 0.001$), and this parameter for the occurrence of sMetS against the absence of metabolic syndrome was as high as 2.04; CI $95\%$ = 1.11–3.87 ($p \leq 0.05$). The odds ratio for the lower education group against the higher was 3.88; CI $95\%$ = 1.87–8.46 ($p \leq 0.001$). We also performed an alternative predictive model based on the number of MetS components and patients’ cognitive performance expressed on the MoCA scale. This model predicted a negative impact of an increasing number of MetS components on cognitive performance given by MoCA levels ($r = 0.44$; $p \leq 0.05$), at a classification success rate of $61\%$. The addition of other input parameters (gender, physical activity, smoking habits) that were available in the research did not improve the quality of the model. ## 4. Discussion Previously, in a pilot study investigating the feasibility of cognitive screening within extended pharmaceutical care in elderly patients with sMetS [10,11], we reported that the population over 60 years of age exhibits lower cognitive performance in the MoCA test and that subnormal MoCA scores are significantly more frequent with increasing age of study participants. In this investigation, which widens the previous findings, we hypothesized that sMetS has a different influence on cognitive performance in the younger elderly population aged 60–74 years and in the 75+ population. The main results of the present study are as follows: (i) the presence of sMetS did not have a significant effect on the achieved MoCA score in elderlies aged 60–74 years; (ii) sMetS had a moderate but significant effect on the achieved MoCA score in participants aged 75 years and more. ## 4.1.
Prevalence of MetS and Cognitive Impairment in Elderly Several recent studies reported that MetS increases the risk of developing CI or dementia in elderly patients aged 60–75 [22,31] but not in the 75+ elderly population [25,28,34]. These outcomes may have been related to a survival bias, because participants with more severe MetS may have passed away before reaching the older age [28]. Our findings did not show an association between sMetS and lower MoCA scores in participants aged 60–74 years compared to age-matched patients without sMetS. A potential explanation of this controversy may lie in the possible influence of single MetS components, as they strongly correlate with lower cognitive performance [2]. We can only speculate that there could be a more substantial influence of age than of sMetS on CI in younger seniors. ## 4.2. Prevalence of MetS and Cognitive Impairment in Younger Elderly Patients Recent studies reported that the MetS-related CI observed in younger elderly participants aged 60–74 years [22,31] tends to diminish after reaching age 75+ [34] and can disappear or reverse in an oldest-old cohort [29,30]. Instead, our results showed the opposite, i.e., a minimal but significantly higher occurrence of MoCA ≤ 24 points in the 75+ subcohort with sMetS when compared to the 75+ sMetS− group. Decelerated CI related to MetS was shown in the 75+ cohort [28], mainly in 85+ [30]. ## 4.3. Prevalence of MetS and Cognitive Impairment in Older Elderly Patients The presence of MetS in 75+ may be a protective evolutionary factor against the harmful aging process [28], and it may also have survival benefits in 75+ individuals with cardiovascular diseases [27,35]. Individuals with cardiovascular diseases who reached the age of 85+ may be relatively less susceptible to the adverse effects of MetS and its components [29,30].
Late-life MetS can also suppress the effects of other risk factors for the deterioration of cognitive features, such as malnutrition [36]. Weight loss may be a potential risk factor for CI or Alzheimer’s disease and a part of the process of dementia [37]. Our findings support the hypothesis that the effect of MetS on cognitive function with advancing age (after 75 years) is relatively weakened and that individuals with components of MetS aged 85+ years are probably more resistant to the effect of MetS on cognition. ## 4.4. Coexistence of the Three Risk Factors: Occurrence of MetS, Age 75+, Lower Education Predicts Lower Cognitive Performance Our predictive model for the estimation of CI status was able to discriminate between individuals with (MoCA score ≤ 24) and without impaired cognitive functions (MoCA score >24) using three simple variables: the age group (60–74 vs. over 75 years), the presence or absence of MetS, and the lower vs. higher education level. This was superior to the predictive model using the number of MetS components. It might represent a simple tool for pharmacists to identify patients at risk of CI who could need an individual approach in pharmaceutical care, e.g., control and management of modifiable risk factors for CI, revision of the medication list, and management of medication with a potential risk for CI. Patients at risk of CI may also undergo cognitive screening in a pharmacy and then be advised to visit a specialist when needed. Although previously suggested predictive models [38,39] reached higher predictive performance than ours, they used various parameters such as subjective well-being, educational level, marital status, and the presence of other chronic diseases obtained within the medical examination. The advantage of our predictive model lies in its use of a few predictors that are easy to collect within routine pharmaceutical counselling. ## 4.5.
Possible Pathological Background Explaining the Link between sMetS and CI Previously [10], we reported an influence of the individual sMetS components, type 2 diabetes mellitus, hypertension and obesity, but not dyslipidaemias, on lower cognitive performance. This is also relevant to the current findings. First, numerous epidemiological studies supported that diabetes is closely related to a higher risk of cognitive decline [40], including mild cognitive impairment and dementia. At the same time, cognitive dysfunction is increasingly recognised as an important comorbidity and complication of diabetes that affects patients’ quality of life and diabetes self-monitoring, and is related to diabetes treatment-related complications [41]. Watts and colleagues [42] reported that insulin is an important predictor of cognitive performance and decline, in opposite directions. In healthy older patients with normal cognition, higher insulin predicted greater cognitive impairment in attention and verbal memory. In contrast, in the group with early Alzheimer’s disease, higher insulin was associated with better cognitive performance in attention and verbal memory. In general, hyperglycaemia is associated with lower cognitive abilities and with a prevalence of mild cognitive impairment in elderly subjects [2], and the achieved score in the Mini-Mental State Examination test is negatively correlated with fasting hyperglycaemia in the elderly population [2]. Diabetes is closely associated with a high risk of hyperglycaemia and hypoglycaemia events, mainly in the elderly, which may be caused by the disease itself or by the glucose-lowering medication and may lead to impairment of cognitive features. Cognitive dysfunction can also predict these complications. Early identification of individuals, particularly in older age, with mild cognitive decline, together with adequate intervention, can improve adherence and may help to avoid later complications [41].
Second, a number of studies unveiled a relationship between high blood pressure and cognition in the elderly population. Their results showed a significant association between elevated blood pressure and lower cognitive performance in older subjects [2,43]. The combination of hypertension in midlife and low diastolic blood pressure in late life was related to a reduction of brain volume and lower cognitive performance in the aging population [44,45]. In addition, a longitudinal study demonstrated that long-duration hypertension predicted cognitive decline independently of age [46]. In line with this, women at the age of 75 years had faster declines in global cognition associated with higher systolic blood pressure and lower diastolic blood pressure [47]. Third, a relationship between obesity and worsened cognitive performance has also been investigated in many studies, though the outcomes are controversial. While being overweight is related to a lower risk of cognitive decline in the elderly population, central obesity increases that risk [48]. While obesity, as a component of MetS, is a risk factor for cardiovascular and cerebrovascular events in young and middle age [49], weight loss later in life can be an early warning signal for the development of both Alzheimer’s disease and mild cognitive impairment [37]. The possible explanation may lie in a key link between obesity, and also other components of MetS, and cognitive decline as a consequence of inflammation and oxidative stress in the brain tissues [50]. ## 4.6. Limitations Our study has certain limitations in addition to the cohort size. First, we used only the medication list of patients and the diagnoses on the prescription to identify sMetS components. Second, the pharmacotherapy of other possible morbidities was not analysed. In addition, possible biases might have occurred. The main sources of probable data distortion in our research are selection, information, and confounding bias.
We assume the most significant contribution comes from selection bias. It is well known that age, education and estimated premorbid intelligence correlate significantly with the total MoCA score. Since this was a pilot study, the extent of these individual contributions was not estimated. ## 5. Conclusions We found a higher prevalence of sMetS, a higher number of sMetS components and lower cognitive performance in MoCA in patients aged 75+. We confirmed the hypothesis that advancing age has a significant influence on cognition in both age groups (60–74 years vs. 75+). We observed a moderate but significant link between sMetS and CI exclusively in individuals aged 75+ but not in younger old participants. This finding confirms that metabolic syndrome substantially contributes to the loss of cognitive performance during senescence, and it should also be considered when providing pharmaceutical services, particularly in adults aged 75+. Considering that forgetfulness or impaired memory is a common reason for low adherence in the elderly, early identification of elderly patients with potential cognitive impairment can help control modifiable risk factors for CI, prevent irregular medication use or non-adherence to medication and thus delay further complications.
# Assessment of the Possibility of Using the Laryngoscopes Macintosh, McCoy, Miller, Intubrite, VieScope and I-View for Intubation in Simulated Out-of-Hospital Conditions by People without Clinical Experience: A Randomized Crossover Manikin Study ## Abstract The aim of the study was to evaluate the laryngoscopes Macintosh, Miller, McCoy, Intubrite, VieScope and I-View in simulated out-of-hospital conditions when used by people without clinical experience, and to choose the one that, in the case of failure of the first intubation (FI), gives the highest probability of a successful second (SI) or third (TI) attempt. For FI, the highest success rate (HSR) was observed for I-View and the lowest (LSR) for Macintosh ($90\%$ vs. $60\%$; $p \leq 0.001$); for SI, HSR was observed for I-View and LSR for Miller ($95\%$ vs. $66.7\%$; $p \leq 0.001$); and for TI, HSR was observed for I-View and LSR for Miller, McCoy and VieScope ($98.33\%$ vs. $70\%$; $p \leq 0.001$). A significant shortening of intubation time between FI and TI was observed for Macintosh (38.95 (IQR: 30.1–47.025) vs. 32.4 (IQR: 29–39.175), $p = 0.0132$), McCoy (39.3 (IQR: 31.1–48.15) vs. 28.75 (IQR: 26.475–35.7), $p \leq 0.001$), Intubrite (26.4 (IQR: 21.4–32.3) vs. 20.7 (IQR: 18.3–24.45), $p \leq 0.001$), and I-View (21 (IQR: 17.375–25.1) vs. 18 (IQR: 15.95–20.5), $p \leq 0.001$). According to the respondents, the easiest laryngoscopes to use were I-View and Intubrite, while the most difficult was Miller. The study shows that I-View and Intubrite are the most useful devices, combining high efficiency with a statistically significant reduction in time between successive attempts. ## 1. Introduction Ensuring airway patency is the primary task of a paramedic in a patient with symptoms of respiratory failure [1]. It enables the delivery of oxygen to the lungs and the elimination of carbon dioxide from the body [2].
Various devices are used to obtain airway patency, e.g., oropharyngeal, nasopharyngeal, or supralaryngeal airway devices. However, the gold standard to ensure airway patency and at the same time to protect the lungs against the aspiration of food content is endotracheal intubation [2]. Correct intubation requires not only theoretical knowledge but also considerable manual skills, which deteriorate if not constantly improved [3]. This especially applies to people who do not perform it on a daily basis [1]. In out-of-hospital conditions, endotracheal intubation is most often performed at ground level, in conditions requiring the adoption of non-physiological and non-ergonomic body positions, often in unfavorable environmental conditions. This results in a significantly reduced level of comfort for the professional, which, together with the stressful situation related to the patient’s life-threatening condition and responsibility for his or her health, may affect the effectiveness of intubation [4]. Difficult or failed tracheal intubation is a well-known cause of morbidity and mortality associated with anesthesia and emergency medicine [5]. It has been proven that repeated intubation attempts are associated with an increased incidence of adverse events [6], transport delay, prolonged hospitalization, poorer neurological outcomes [7] and increased mortality [8]. In the hospital setting, video laryngoscopy has been shown to reduce the number of failed intubations, improve the view of the glottis, and reduce airway trauma [1]. However, there are only a few heterogeneous studies comparing video laryngoscopy and direct laryngoscopy in the pre-hospital setting [9]. Moreover, in pre-hospital care, the success of intubation depends not only on the type of laryngoscope used, but also on the training and experience of the healthcare provider with the device.
All these factors result in prolonged intubation when intubation in out-of-hospital conditions is performed by people with little experience [10]. Therefore, it seems reasonable to search for a device whose use by people with little or minimal clinical experience will result in the most effective and quickest endotracheal intubation and, at the same time, in the quickest learning in the event of potential failures [4]. The aim of the study was to assess the possibility of using the following laryngoscopes, Macintosh, Miller, McCoy, Intubrite, VieScope and the I-View video laryngoscope, in simulated out-of-hospital conditions by providers without clinical experience, and to choose the laryngoscope among them that, in the case of a failed first intubation, offers the greatest possibility of a successful second or third intubation as soon as possible. The secondary aim was to assess the learning and teaching aspect of laryngoscopy for paramedics regarding the third attempt of intubation using video devices or other laryngoscopes. In the available literature, there are few data comparing intubation times in consecutive intubation attempts. This seems to be a significant dependency conditioning the potential usefulness of a given device in medical rescue, especially when it is used by people without clinical experience, as repeated, prolonged intubation attempts are associated with a later poor prognosis in patients [7]. ## 2.1. Materials In the study, we compared the majority of laryngoscopes available on the market that enable direct laryngoscopy, Macintosh (HEINE Optotechnik GmbH & Co. KG, Gilching, Germany), Miller (Scope Medical Devices Pvt.
Ltd., Ambala City, India), McCoy (Truphatek, Jerusalem, Israel), Intubrite® (Intubrite LLC, Vista, CA, USA), VieScope® (Adroit Surgical, Oklahoma City, OK, USA) with a dedicated 15 Fr Voir Bougie guidewire, and the I-View™ VL video laryngoscope (Intersurgical Ltd., Wokingham, Berkshire, UK), in a simulated out-of-hospital setting when used by people with little clinical experience on a manikin model (Laerdal Airway Management Trainer, Stavanger, Norway; manikin of universal difficulty) (Scheme 1). Endotracheal tubes No. 7 were used for intubation. In each case, the endotracheal tubes and guides were covered with a standard lubricant dedicated to simulators. Simulated out-of-hospital conditions were created by placing the manikin in a neutral position at floor level. ## 2.2. Study Design The study was conducted from 21 February 2021 to 8 June 2021 at the Norbert Barlicki University Teaching Hospital No. 1 in Lodz. Sixty randomly selected students in the third year of Paramedic Science, full-time first-cycle studies at the Medical University of Lodz, qualified for the study. All students signed informed consent for voluntary participation in the study. The exclusion criterion was prior clinical experience with the laryngoscopes used in the study. All participants listened to a 45 min lecture on the construction of laryngoscopes and the principles of using them, as well as the anatomical structure and the method and technique of intubation. After the presentation, the instructor demonstrated correct intubation with each of the 6 tested laryngoscopes. Then, under the supervision of the teacher, the students participated in a workshop where they had the opportunity to intubate a manikin, placed on the operating table at the optimal height for each participant, with each of the tested laryngoscopes. After a month, the 60 students took part in the actual study. ## 2.3.
Study Protocol After signing their informed voluntary consent to participate in the study, the following demographic and medical data of the test participants were recorded in pseudonymized form: sex; age; and experience level, i.e., the number of manikin intubations performed so far by the subject and which laryngoscopes were used for previous intubations. Participants were asked to perform three endotracheal intubations on a certified airway training manikin (Laerdal Airway Management Trainer, Stavanger, Norway; universal difficulty) placed at floor level in a neutral position (out-of-hospital simulation), using each of the evaluated laryngoscopes. Each participant used all devices in random order in a crossover arrangement. The order in which the laryngoscopes were used was randomized using sealed opaque envelopes. The randomization sequence was generated using the Randomizer Program (randomizer.org). The flow diagram is presented in Figure 1. Timing began with taking the laryngoscope and ended with initial ventilation with a resuscitation bag after placement and sealing of the endotracheal tube. Intubation was considered successful after confirming the breathing movements of the manikin’s lungs. The attempt was defined as a failure in the absence of manikin breathing movements or for an intubation time of more than 60 s. The criterion of over 60 s defining the intubation attempt as unsuccessful was adopted because the study was to assess the usefulness of the devices by people without clinical experience in intubation. After the first intubation attempt with a given laryngoscope, two subsequent intubation attempts with the same device were made. After the completion of three intubations with a given laryngoscope, there was a break of at least 2 h (in order to eliminate the impact of intubation with a given laryngoscope on the use of the next device). After the break, the subject proceeded to three intubations of the manikin with a randomly selected device.
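The per-participant crossover ordering can be illustrated with a simple seeded shuffle. The study itself used the Randomizer Program and sealed opaque envelopes, so the snippet below is only a sketch of the idea, with hypothetical naming.

```python
import random

# The six devices compared in the study
DEVICES = ["Macintosh", "Miller", "McCoy", "Intubrite", "VieScope", "I-View"]

def device_order(participant_id: int) -> list:
    """Return a reproducible random crossover order of the six
    laryngoscopes for one participant (seeded only for illustration)."""
    rng = random.Random(participant_id)
    order = list(DEVICES)
    rng.shuffle(order)
    return order

# One randomized order per each of the 60 participants
orders = [device_order(pid) for pid in range(60)]
```

Each participant thus receives all six devices exactly once, in an order independent of the other participants, which is the defining property of the crossover arrangement described above.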
The subject assessed intubation with a given laryngoscope on the basis of a subjective assessment of tracheal intubation difficulty (number rating scale 0–10, 0: no difficulty, 10: highest difficulty). The following data were pseudonymously recorded for all simulations: success of intubation and position of the tube, tracheal vs. esophageal (primary endpoint); comparison of times to ventilation in the first, second, and third intubation attempts (secondary endpoint); and feelings of the subjects (secondary endpoint). ## 2.4. Statistical Analysis The distribution of continuous data was checked with the Shapiro–Wilk test. As the average time of intubation had a distribution other than normal for at least one laryngoscope ($p \leq 0.05$), continuous data were presented as median with IQR. Furthermore, the dependencies between them were assessed with the Kruskal–Wallis test with Dunn’s post hoc tests. Dependencies for dependent data (comparisons between attempts) were assessed with the use of Student’s t-test for dependent data in the case of normal distribution and Wilcoxon’s test in other cases. In both cases, the Bonferroni correction was used. Nominal data were presented as n (% of total) and assessed with a test chosen based on the size of the smallest subgroup. The statistical analysis was performed using Statistica 13.1PL (StatSoft, Krakow, Poland). ## 3.1. Demographic and Contextual Data The study included 60 third-year students of Paramedic Science (18 women and 42 men). The average age of the respondents was 22 years. Among the surveyed, 21 students had intubated a manikin fewer than 10 times so far, 22 students had performed between 10 and 20 manikin intubations, and 17 students had performed more than 20 manikin intubations. Previously, everyone had used only the Macintosh laryngoscope, and only for manikin intubation. ## 3.2.
Primary Endpoint For the first intubation, the highest success rate was observed for the I-View laryngoscope and the lowest for the Macintosh laryngoscope: 54 ($90\%$) vs. 36 ($60\%$; $p \leq 0.001$). In the case of the second intubation, the highest success rate was observed for the I-View laryngoscope and the lowest for the Miller laryngoscope: 57 ($95\%$) vs. 40 ($66.7\%$; $p \leq 0.001$). In the case of the third intubation, the highest success rate was again observed for the I-View laryngoscope, and the lowest this time for the Miller, McCoy and VieScope laryngoscopes: 59 ($98.33\%$) vs. 42 ($70\%$; $p \leq 0.001$; see Table 1). There were no significant dependencies in the success rate between the first and second attempts, the second and third attempts, and the first and third attempts (see Figure 2). Comparing all laryngoscopes, the highest intubation efficiency was obtained for the I-View laryngoscope ($90\%$, $95\%$, $98.33\%$), followed by the Intubrite laryngoscope ($83.33\%$, $88.3\%$, $91.67\%$) and the VieScope laryngoscope ($65\%$, $80\%$, $70\%$). The effectiveness of the remaining laryngoscopes, Macintosh, McCoy and Miller, oscillated between $60\%$ and $73.33\%$ (see Table 1). An increasing learning curve in the use of the tested laryngoscopes was observed only for the I-View and Intubrite laryngoscopes (see Figure 2). ## 3.3. Secondary Endpoints There were significant differences between the mean times of intubation with the aforementioned laryngoscopes ($p \leq 0.001$). The statistically significant results of the performed post hoc Dunn’s test are shown in Figure 3. A significant shortening of intubation time between the first and the third intubation was observed for the Macintosh laryngoscope (38.95 (IQR: 30.1–47.025) vs. 32.4 (IQR: 29–39.175), $p = 0.0132$), McCoy laryngoscope (39.3 (IQR: 31.1–48.15) vs. 28.75 (IQR: 26.475–35.7), $p \leq 0.001$), Intubrite laryngoscope (26.4 (IQR: 21.4–32.3) vs.
20.7 (IQR: 18.3–24.45), $p \leq 0.001$), and I-View laryngoscope (21 (IQR: 17.375–25.1) vs. 18 (IQR: 15.95–20.5), $p \leq 0.001$). Additionally, a significant shortening of intubation time between the first vs. second attempt and the second vs. third attempt was observed only for the Intubrite and I-View laryngoscopes. In the case of the McCoy laryngoscope, a significant improvement was observed between the second and third attempts and the first and third attempts (see Figure 4). According to the respondents, the easiest laryngoscope to use was the I-View laryngoscope, then the Intubrite, Macintosh, and McCoy, and finally the two laryngoscopes with straight blades: Miller and VieScope (see Figure 5). ## 4. Discussion A significant reduction in intubation time between the first and third intubations was observed for the Macintosh, McCoy, Intubrite and I-View laryngoscopes. In addition, a significant reduction in intubation time between the first and second attempts and the second and third attempts was observed only with the Intubrite and I-View laryngoscopes. For the McCoy laryngoscope, there was a significant improvement in intubation times between the second and third attempts and the first and third attempts. The I-View laryngoscope turned out to be the easiest device to use according to the subjects’ ratings. This is probably due to the fact that there is no need to keep a straight line of sight between the eyes of the professional and the glottis. In a simulation where the manikin was intubated at floor level, the lack of the need to maintain this line is important because it does not require the intubating person to assume a more forced, bent body position, which is uncomfortable and non-ergonomic [3]. In the case of the I-View laryngoscope, the possibility of evaluating the view of the glottis on the device’s monitor makes the assumed body position less bent and more comfortable for the intubating person [3].
This is essential when a patient is intubated by people without experience in airway management. In this situation, if there is a choice between a Macintosh laryngoscope and video laryngoscopes, including I-View, some authors suggest choosing the latter [11]. In the case of intubation by anesthesiologists, Wakabayashi believes that, despite the fact that video laryngoscopes give better visibility of the glottis and are easier to use, the effectiveness and times of intubation with a classic Macintosh laryngoscope are at an acceptable level. This is vital given the widespread availability of Macintosh laryngoscopes and the still limited availability of video laryngoscopes [12]. Among the video laryngoscopes, some authors suggest that the I-View laryngoscope is a suitable device for use in difficult conditions of pre-hospital care due to its ease of use and single-use design [13]. In their study, Maritz et al. showed that the use of video laryngoscopy provided better intubation conditions, enabled better visualization of the glottis, and thus facilitated intubation when used not only by anesthesiologists with extensive experience in conventional and video laryngoscopy, but also by paramedics with little previous experience in conventional and video laryngoscopy [10,14]. Although the use of video laryngoscopes did not affect the success of intubation among anesthesiologists, in the hands of paramedics with little experience in intubation it reduced the failure rate from $14.8\%$ for the conventional Macintosh laryngoscope to $3.7\%$ for the video laryngoscope [10]. The high position of the Intubrite laryngoscope is probably related to the new, ergonomic handle of this laryngoscope [3]. The introduction of more ergonomic devices would reduce the professional’s workload, which is an important factor determining patient safety [5,15,16,17].
This applies in particular to people with little experience in intubation, in whom potential intubation difficulties may occur more often, especially in the group of obese patients. These patients, due to their physique and the anatomy of their airways, may require greater strength to open the airways [18]. According to J. Tesler and J. Rucker, when the Intubrite laryngoscope was used in out-of-hospital conditions, the percentage of repeated intubation attempts and the percentage of tooth damage decreased compared to the Macintosh laryngoscope [4]. Similar results were obtained by T. Gaszyński, who stated that in the case of the Intubrite laryngoscope the patient’s body is less traumatized compared to the Macintosh laryngoscope [19]. The Macintosh and McCoy laryngoscopes in our study had similar first intubation success rates of $60\%$ and $65\%$, respectively, second intubation success rates of $73.3\%$, and third intubation success rates of $73.3\%$ and $70\%$, respectively. Furthermore, both laryngoscopes showed a significant improvement in intubation time between the first and third attempts. Moreover, the McCoy laryngoscope enabled improvement between the second and third attempts. Therefore, in the case of failure of the first intubation, they give a chance for the correct placement of the endotracheal tube by people without clinical experience in subsequent attempts. However, in terms of average intubation times, both laryngoscopes were inferior to the I-View and Intubrite laryngoscopes, yet the Macintosh laryngoscope turned out to be easier to use in our study. There are different opinions in the literature regarding the clinical situations in which one of these two laryngoscopes is more useful than the other.
In a similar research model, in which inexperienced medical students intubated manikins with the Macintosh and McCoy laryngoscopes, Higashizawa found that the time needed to correctly position the endotracheal tube was similar with both laryngoscopes but the McCoy laryngoscope was more difficult to operate. The author suggested that the Macintosh laryngoscope is more useful for teaching inexperienced medical students [18], whereas Yildirim showed that the use of the McCoy laryngoscope shortens intubation and makes it easier than the use of the Macintosh laryngoscope [20]. However, Sethuraman came to different conclusions, stating that there is no advantage in using the McCoy laryngoscope over the Macintosh laryngoscope in the examination on manikins with difficult airways [21]. In turn, in patients with limited mobility of the cervical spine, Uchida showed that the McCoy laryngoscope facilitates intubation compared to the Macintosh laryngoscope [22], and it is also superior to some video laryngoscopes [23]. Similar conclusions were drawn by Gabbott and Maharaj [24,25]. However, the latter author believes that, although the McCoy laryngoscope improves the visualization of the larynx more than the Macintosh laryngoscope in patients with both normal and difficult airways, reducing the number of intubation attempts and the number of optimization maneuvers required, it has proven to be more difficult and less reliable than the Macintosh laryngoscope [25,26,27,28,29,30,31]. In patients with morbid obesity, Nandakumar et al. found the McCoy laryngoscope to be as effective as the Macintosh laryngoscope, and concluded that due to its widespread availability and familiarity the latter laryngoscope should be used in this group of patients [26]. In our study, the success rates of the first, second, and third intubation with the Miller laryngoscope were $73.3\%$, $66.7\%$, and $70\%$, respectively.
There was no statistically significant reduction in intubation time between successive intubation attempts, and the Miller laryngoscope also proved the most difficult to use among our subjects. Its low position in our ranking is probably due to the fact that maintaining a straight line between the operator’s eye and the airway entrance, when intubating a manikin lying at floor level, requires adopting the least comfortable body position. The limited ability to lift the epiglottis with this laryngoscope also adds to the operator’s effort. Vidhya came to different conclusions, finding that the Miller laryngoscope enables much better visualization of the larynx than the McCoy and Macintosh laryngoscopes, even in patients with difficult airways [31]. Similarly, Achen claimed that the Miller laryngoscope provides a better view of the airway entrance than the Macintosh laryngoscope, and that everyone should therefore learn laryngoscopy with both devices [32]. This is important because, according to other authors, although the view of the glottis was better with the Miller laryngoscope than with the Macintosh laryngoscope, intubation conditions turned out to be better with the Macintosh laryngoscope [33,34]. The Miller laryngoscope was superior to the Macintosh and McCoy laryngoscopes for visualizing the glottis in children [35,36]. The VieScope laryngoscope, a variant of the Miller laryngoscope that requires two-stage intubation, was similarly effective on the first intubation to the McCoy and Macintosh laryngoscopes: $65\%$, $65\%$, and $60\%$, respectively. On the second intubation, its effectiveness increased to $80\%$, approaching that of the Intubrite laryngoscope ($88.3\%$), while on the third it decreased to $70\%$. There was no statistically significant difference between intubation times in consecutive trials. 
According to the respondents, this device was also as difficult to use as the Miller laryngoscope. Its low rank, like that of the Miller laryngoscope, may result from the need to maintain a straight line from the intubating eye to the airway entrance and to adopt a more strenuous body position than with the I-View, Intubrite, McCoy, and Macintosh laryngoscopes. The VieScope laryngoscope was originally designed for battlefield medicine, to facilitate the intubation of patients with difficult airways by being always ready for use and by focusing light on the target tissues. This was confirmed in Maślanka’s study, which showed that, in difficult airways, the VieScope laryngoscope had a shorter intubation time and a higher first-attempt success rate than the Macintosh laryngoscope [37]. Similar conclusions were drawn by Wieczorek et al., who compared the VieScope and direct laryngoscopy during emergency intubation of a pediatric manikin model performed by paramedics with and without personal protective equipment [38]. In a prospective, multicenter, randomized study, Szarpak et al. showed that the VieScope laryngoscope enables more effective and faster intubation than the Macintosh laryngoscope in patients with suspected or confirmed COVID-19 who required pre-hospital cardiopulmonary resuscitation [39]. In those studies, however, the study group consisted of paramedics with clinical experience and the ability to use various laryngoscopes, whereas our study group consisted of people without clinical experience, and there was no scenario imitating difficult airways, which could explain the lack of advantage of this laryngoscope over the other devices. A further difficulty for the participants was that the VieScope requires two stages to intubate, which can make it hard for inexperienced people to use. 
This translated into a result similar to that of the Miller laryngoscope in terms of reported subjective intubation difficulties. Similar conclusions were reached by Ecker et al., who conducted their study on a manikin under simulated conditions of massive regurgitation. For patients with lower esophageal sphincter insufficiency, intubation with the VieScope laryngoscope was longer than with the Macintosh laryngoscope, as in our study, and resulted in a greater amount of content aspirated into the airways; their study group consisted of experienced anesthesiologists, i.e., people who perform intubation daily and have experience in handling the various situations that may occur during intubation [40]. The longer intubation time of the VieScope laryngoscope compared with other airway devices was again noted by Ecker when he compared it with the Glidescope videolaryngoscope in both simulated normal and difficult airways [41]. Prolonged intubation time with the VieScope laryngoscope was also found in patients qualified for elective surgical procedures, with no advantage of this laryngoscope over the Macintosh laryngoscope in this group [42]. Our study showed that it is necessary to practice airway-management methods, including endotracheal intubation, constantly [27,28,29]. It is particularly important to learn how to use multiple laryngoscopes, as this may be useful in unconventional situations requiring the modification of technique, equipment, or body position [33]. Each exercise in this area reduces the risk of error, reduces the stress of those performing the procedure and, most importantly, increases the patient’s chance of surviving and returning to their pre-event state [33]. A similar conclusion was drawn by Pieters et al. from their study comparing seven videolaryngoscopes in manikin settings [42]. 
They compared the classic Macintosh laryngoscope with the Airtraq, Storz C-MAC, Coopdech VLP-100, Storz C-MAC D-Blade, GlideScope Cobalt, McGrath Series5, and Pentax AWS, observing 65 anesthetists, 67 anesthesia residents, 56 paramedics, and 65 medical students intubating the trachea of a standardized manikin model. Their results underline the variability in device performance across individuals and staff groups, which has important implications for which devices hospital providers should rationally use. Videolaryngoscopes are proven to offer a better view of the laryngeal entrance [43] and therefore reduce the risk of possible injuries related to intubation efforts [44]; however, training is still needed to avoid problems with the use of videolaryngoscopy [45,46]. In addition, using these tools to teach inexperienced providers may broaden their applicability [43,47,48]. The study has several limitations. Firstly, it was conducted on a manikin model, where simulated out-of-hospital conditions were created by placing the manikin at floor level, without the influence of other external factors affecting the effectiveness of intubation. Secondly, difficult airway scenarios were not studied. Finally, the study group consisted of Paramedic Science students who nevertheless had little previous experience in intubating a manikin with a Macintosh laryngoscope, owing to their limited years of study. ## 5. Conclusions Taking into account the results of the study, the I-View and Intubrite laryngoscopes turned out to be the most useful devices for intubation in simulated out-of-hospital conditions by people with no clinical experience. They combined high intubation efficiency with statistically significant shortening of intubation times between successive attempts. Due to the small study group and the manikin model, additional studies should be conducted on a larger group of subjects. 
## Figures, Scheme and Table **Scheme 1:** *From the left: Macintosh laryngoscope, McCoy laryngoscope, Miller laryngoscope, VieScope laryngoscope, Intubrite laryngoscope, I-View laryngoscope.* **Figure 1:** *Flow chart. Each participant performed intubation in all settings in a randomized controlled order. There were no drop-outs.* **Figure 2:** *Graph of the percentage success of intubation with a given laryngoscope in subsequent attempts.* **Figure 3:** *The mean time of intubation in different intubation approaches (Kruskal–Wallis test: p < 0.001; presented p-values are taken from Dunn’s test).* **Figure 4:** *Graph of mean intubation times with a given laryngoscope in subsequent intubation attempts.* **Figure 5:** *The feelings of the respondents (0—no difficulties; 10—maximum difficulties).* TABLE_PLACEHOLDER:Table 1
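Differences in mean intubation time across devices (Figure 3) were assessed with a Kruskal–Wallis test followed by Dunn's post-hoc comparisons. As a minimal sketch of the omnibus statistic only, the pure-Python function below computes the Kruskal–Wallis H for independent groups; the timing values are hypothetical, and the Dunn's post-hoc step is omitted.

```python
from itertools import chain

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic for k independent samples.

    Minimal sketch: assigns mid-ranks to tied values but omits the
    tie-correction factor, so it matches standard implementations
    exactly only for tie-free data.
    """
    pooled = sorted(chain.from_iterable(groups))
    # mid-rank for each distinct value (1-based ranks)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    n = len(pooled)
    s = sum(len(g) * (sum(ranks[x] for x in g) / len(g)) ** 2 for g in groups)
    return 12.0 / (n * (n + 1)) * s - 3 * (n + 1)

# Hypothetical intubation times (s) for three laryngoscopes
h = kruskal_wallis_h([20.1, 22.4, 25.0], [30.2, 33.1, 35.9], [41.0, 44.2, 47.5])
# h == 7.2 for these fully separated groups
```

Without ties, H is compared against a chi-square distribution with k − 1 degrees of freedom; for k = 3 groups, H = 7.2 corresponds to p ≈ 0.027.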
# 11,12-EET Regulates PPAR-γ Expression to Modulate TGF-β-Mediated Macrophage Polarization ## Abstract Macrophages are highly plastic immune cells that can be reprogrammed to pro-inflammatory or pro-resolving phenotypes by different stimuli and cell microenvironments. This study set out to assess gene expression changes associated with the transforming growth factor (TGF)-β-induced polarization of classically activated macrophages into a pro-resolving phenotype. Genes upregulated by TGF-β included Pparg, which encodes the transcription factor peroxisome proliferator-activated receptor (PPAR)-γ, and several PPAR-γ target genes. TGF-β also increased PPAR-γ protein expression and activity via activation of the Alk5 receptor. Preventing PPAR-γ activation markedly impaired macrophage phagocytosis. TGF-β also repolarized macrophages from animals lacking the soluble epoxide hydrolase (sEH); however, these cells responded differently and expressed lower levels of PPAR-γ-regulated genes. The sEH substrate 11,12-epoxyeicosatrienoic acid (11,12-EET), previously reported to activate PPAR-γ, was elevated in cells from sEH−/− mice. However, 11,12-EET prevented the TGF-β-induced increase in PPAR-γ levels and activity, at least partly by promoting proteasomal degradation of the transcription factor. This mechanism is likely to underlie the impact of 11,12-EET on macrophage activation and the resolution of inflammation. ## 1. Introduction The recruitment of neutrophils and monocytes to inflamed tissue and their differentiation into macrophages is a crucial step in the inflammatory process. However, once the neutrophil respiratory burst subsides, these and other cells, i.e., macrophages, eosinophils and lymphocytes, need to be removed to restore homeostasis [1]. To support the removal of apoptotic cells and tissue debris (efferocytosis), macrophage function is altered and the cells are reprogrammed into a pro-resolving phenotype. 
Polarized macrophages are frequently classified broadly into two main groups: classically activated (M1) macrophages, which are induced by T-helper 1 (Th-1) cytokines, i.e., the combination of bacterial lipopolysaccharide (LPS) and interferon γ (IFN-γ), and alternatively activated (M2) macrophages, which have a pro-resolving and pro-angiogenic phenotype and are induced by Th-2 cytokines [2,3]. The latter group can be further subdivided into more refined phenotypes (M2a, M2b, M2c, and M2d) depending on the stimulus used, such as interleukin (IL)-4 (M2a) or transforming growth factor β (TGF-β) (M2c). However, the phenotypic characterization of macrophages is highly complicated, and there are many more distinct genetic fingerprints and metabolic states than are reflected in a basic M0/M1/M2 classification [4,5,6]. Indeed, additional subtypes have been identified, such as macrophages stimulated by oxidized phospholipids, oxidized LDL, or hemoglobin [3]. TGF-β is a master immune regulator and checkpoint that has a major impact on immune suppression within the tumor microenvironment [7]. It has also been implicated in poor responsiveness to cancer immunotherapy [8]. In inflamed tissues, macrophage TGF-β synthesis is stimulated by the uptake of apoptotic cells, a step that is essential for the repolarization of pro-inflammatory macrophages into a pro-resolving phenotype (for reviews see [9,10]). Although endothelial TGF-β signaling drives endothelial-to-mesenchymal transition and vascular inflammation [11], there is some controversy about the exact impact of TGF-β on atherogenesis. Rather than promoting vascular inflammation, there is evidence suggesting that TGF-β signaling plays an important role in the protection against excessive plaque inflammation, loss of collagen content, and induction of regulatory immunity (reviewed by [12,13]). 
The current study set out to determine changes in macrophage gene expression associated with the repolarization of classically activated (M1) macrophages into a pro-resolving phenotype by TGF-β. ## 2.1. Animals C57BL/6N mice (6–8 weeks old) were purchased from Charles River (Sulzfeld, Germany). Floxed sEH mice (Ephx2tm1.1Arte) were generated in the C57BL/6N background by TaconicArtemis GmbH (Cologne, Germany) and crossed with Gt(ROSA)26Sortm16(Cre)Arte mice (TaconicArtemis) expressing Cre under the control of the endogenous Gt(ROSA)26Sor promoter to generate mice globally lacking sEH (sEH−/−), as described [14]. Age-, gender-, and strain-matched mice were used throughout; littermates were used where possible. In cases where studying littermates was not possible, cells were isolated from age-matched C57Bl/6N mice. Preliminary experiments revealed that responses were comparable in cells from C57Bl/6N and Cre-sEHflox/flox mice and different from those of the sEH−/− (Cre+ sEHflox/flox) mice. For the isolation of bone marrow, mice were sacrificed using 4% isoflurane in air followed by cervical dislocation. ## 2.2. Monocyte Isolation and Macrophage Polarization Murine monocytes were isolated from the bone marrow of 8–10-week-old mice and differentiated to naïve (M0) macrophages in RPMI 1640 medium (Invitrogen; Darmstadt, Germany) containing 8% heat-inactivated FCS supplemented with M-CSF (15 ng/mL, Peprotech, Hamburg, Germany) and GM-CSF (15 ng/mL, Peprotech, Hamburg, Germany) for 7 days. Cells were kept in a humidified incubator at 37 °C containing 5% CO2. Thereafter, M0 macrophages were polarized to classically activated M1 macrophages by treatment with LPS (10 ng/mL; Sigma-Aldrich, Munich, Germany) and IFN-γ (1 ng/mL; Peprotech, Hamburg, Germany) for 12 h. Pro-resolving M2c macrophages were repolarized from M1 macrophages by the addition of TGF-β1 (10 ng/mL; Peprotech, Hamburg, Germany) for 48 h, as described [6]. ## 2.3. 
RNA Isolation and Quantitative Real Time PCR (RT-qPCR) Total RNA was extracted and purified from murine macrophages using Tri Reagent (ThermoFisher Scientific, Karlsruhe, Germany) according to the manufacturer’s instructions. Thereafter, RNA was eluted in nuclease-free water, and its concentration was determined (λ260 nm) using a NanoDrop ND-1000 (ThermoFisher Scientific, Karlsruhe, Germany). For the generation of complementary DNA (cDNA), total RNA (500 ng) was reverse transcribed using SuperScript IV (ThermoFisher Scientific, Karlsruhe, Germany) with random hexamer primers (Promega, Madison, WI, USA). Quantitative PCR was performed using SYBR green master mix (Biozym, Hessisch Oldendorf, Germany) and appropriate primers (Table 1) in a MIC-RUN quantitative PCR system (Bio Molecular Systems, Upper Coomera, Australia). Relative RNA levels were determined using a serial dilution of a positive control. The data are shown relative to the mean of the housekeeping gene 18S RNA. ## 2.4. RNA Sequencing Total RNA was isolated from macrophages using the RNeasy Micro kit (Qiagen, Hilden, Germany) according to the manufacturer’s instructions. RNA concentrations were determined using a NanoDrop ND-1000 (ThermoFisher Scientific, Karlsruhe, Germany; λ260 nm). Total RNA (1 µg) was used as input for the SMARTer Stranded Total RNA Sample Prep Kit-HI Mammalian (Takara Bio, Kyoto, Japan). Trimmomatic version 0.39 was employed to trim reads after a quality drop below a mean of Q20 in a window of 20 nucleotides, keeping only filtered reads longer than 15 nucleotides [15]. Reads were aligned against the Ensembl mouse genome version mm10 (Ensembl release 101) with STAR 2.7.10a [16]. Aligned reads were filtered to remove duplicates with Picard 2.25.5 (Picard: a set of tools, in Java, for working with next-generation sequencing data in the BAM format), as well as multi-mapping, ribosomal, and mitochondrial reads. 
Gene counts were established with featureCounts 2.0.2 by aggregating reads overlapping exons on the correct strand, excluding those overlapping multiple genes [17]. The raw count matrix was normalized with DESeq2 version 1.30.1 [18]. Contrasts were created with DESeq2 based on the raw count matrix. Genes were classified as significantly differentially expressed at average count > 5, multiple-testing-adjusted p-value < 0.05, and log2FC > 0.585 or < −0.585. The Ensembl annotation was enriched with UniProt data [19]. The PCA, volcano plots, and pathway enrichment analysis were generated using http://www.bioinformatics.com.cn/srplot, an online platform for data analysis and visualization. ## 2.5. Phagocytosis Assays M1 polarized macrophages were treated with either solvent or the PPAR-γ antagonist GW9662 (10 µmol/L, Merck, Darmstadt, Germany) 2 h prior to repolarization to the M2c phenotype using TGF-β1. Thereafter, cells were incubated in RPMI medium supplemented with 0.1% BSA (37 °C, 5% CO2) and containing pHrodo Red Zymosan bioparticles (10 μg/mL, Invitrogen). After 30 min the cells were washed to remove nonphagocytosed material, and zymosan uptake was visualized and quantified using an automated live cell imaging system (IncuCyte, Sartorius, Göttingen, Germany). ## 2.6. PPAR-γ Activity PPAR-γ activity was measured using a luciferase construct (PPRE-X3-Luc, Addgene No. 1015) which contains three response elements (AGGACAAAGGTCA) upstream of a luciferase reporter [20]. For transfection, M0 macrophages were incubated in RPMI medium containing 0.1% BSA for 2 h prior to the addition of plasmid (100 ng/mL) and Lipofectamine 3000 Transfection Reagent (ThermoFisher Scientific, Karlsruhe, Germany) according to the manufacturer’s instructions. After 24 h, the cells were polarized to M1 and M2c macrophages and stimulated as described in the results section. 
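The differential-expression cutoffs described above (average count > 5, adjusted p-value < 0.05, log2FC > 0.585 or < −0.585) amount to a simple filter over DESeq2 results. A minimal sketch with hypothetical gene records:

```python
def is_differentially_expressed(mean_count, padj, log2fc):
    """Apply the cutoffs used for the RNA-seq contrasts:
    average count > 5, adjusted p < 0.05, and |log2FC| > 0.585
    (about a 1.5-fold change, since 2 ** 0.585 is roughly 1.5)."""
    return mean_count > 5 and padj < 0.05 and abs(log2fc) > 0.585

# Hypothetical genes: (name, mean count, adjusted p, log2 fold change)
genes = [
    ("Pparg", 120.0, 0.001, 1.20),   # up in M2c
    ("Ptgs2",  85.0, 0.004, -0.90),  # down in M2c
    ("Actb", 5000.0, 0.800, 0.05),   # unchanged
    ("RareX",   2.0, 0.010, 2.00),   # excluded by the count filter
]
hits = [name for name, c, p, fc in genes if is_differentially_expressed(c, p, fc)]
# hits == ["Pparg", "Ptgs2"]
```

The gene names and values here are illustrative only; in practice this filter would be applied to the `results()` table produced by DESeq2.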
Luciferase activity was measured 48 h after cell polarization or stimulation with 11,12-EET (1 µmol/L, Cayman Europe, Tallinn, Estonia) using a commercially available kit (ONE-Glo Luciferase Assay System, Promega, Walldorf, Germany). ## 2.7. Immunoblotting Cells were lysed in RIPA lysis buffer (50 mmol/L Tris/HCl pH 7.5, 150 mmol/L NaCl, 10 mmol/L NaPPi, 20 mmol/L NaF, 1% sodium deoxycholate, 1% Triton and 0.1% SDS) enriched with protease and phosphatase inhibitors, and detergent-soluble proteins were resuspended in SDS-PAGE sample buffer. Samples were separated by SDS-PAGE and subjected to Western blotting as described [21]. Membranes were blocked in 3% BSA, then incubated with primary antibodies in the blocking solution and with horseradish peroxidase-conjugated secondary antibodies. Protein bands were visualized using Lumi-Light Plus Western blotting substrate (Roche, Mannheim, Germany) and captured by an image acquisition system (Fusion FX7; Vilber-Lourmat, Torcy, France). The antibody used to identify PPAR-γ was from Santa Cruz (Texas, USA; Cat. # sc-7196, 1:1000), the anti-non-muscle myosin antibody was from abcam (Berlin, Germany; Cat. # ab75590, 1:1000), and the anti-β-actin antibody was from Linaris (Eching, Germany; Cat. # MAK6019, 1:3000). The secondary antibodies used were a goat anti-rabbit IgG H- and L-chain-specific peroxidase conjugate and a goat anti-mouse IgG H- and L-chain-specific peroxidase conjugate (both 1:20,000; Cat. # 401393 and Cat. # 401253, Merck). ## 2.8. Statistical Analyses Data are expressed as mean ± SEM. Statistical analysis was performed using Student’s t test or two-way ANOVA with Tukey’s or Sidak’s post-test. Normalized data were compared using the Kruskal–Wallis rank sum test followed by Dunn’s multiple comparison test (Prism 9.0.2, GraphPad Software Inc., San Diego, CA, USA), as indicated in the figure legends. Values of p < 0.05 were considered statistically significant. ## 2.9. 
Data and Material Availability All data associated with this study are present in the paper or the Supplementary Materials. ## 3.1. Impact of TGF-β-Induced Macrophage Repolarization on Gene Expression Bone marrow-derived monocytes were isolated from wild-type mice and differentiated to naïve (M0) macrophages in the presence of M-CSF and GM-CSF for 7 days. Thereafter, M0 macrophages were either polarized to classically activated (M1) macrophages by adding lipopolysaccharide (LPS) and interferon (IFN)-γ for 12 h, or into pro-resolving M2c macrophages by treating M1 macrophages with TGF-β1 for 48 h. RNA sequencing (RNA-seq) was then performed to identify changes in gene expression associated with macrophage polarization. Principal component analysis (PCA) confirmed that the three groups of macrophages clustered together, with clear differences between the polarization types (Figure 1A, Table S1). As expected, the expression of the classical M1 marker genes Nos2, Ptgs2, Il1b, and Nlrp3 was significantly higher in M1 than in M2c polarized macrophages. On the other hand, typical M2/M2c markers, e.g., Arg1 and Vegfa, were higher in M2c than in M1 polarized macrophages (Figure 1B). A closer analysis of the genes differentially expressed in M2c versus M1-polarized macrophages revealed additional marked differences, with TGF-β inducing the upregulation of 2952 genes and the downregulation of 2051 genes, including the pro-inflammatory genes Cxcr4, Ptgs2 and Angptl4. One of the genes whose expression was significantly increased in M2c macrophages was Pparg, and gene set enrichment analysis identified changes in the expression of several targets of the peroxisome proliferator-activated receptor (PPAR) family of transcription factors (Figure 1C). PPAR-γ-regulated genes induced by TGF-β included Angptl4, Abcd2, Eepd1 and Tmem8. ## 3.2. 
TGF-β-induced M2c Macrophage Polarization Relies on PPAR-γ and Alk5 Activation To determine the importance of PPAR-γ in the regulation of selected macrophage genes, we assessed the impact of the PPAR-γ antagonist GW9662 on the expression of three selected genes in M2c macrophages, i.e., Cxcr4 (higher in M2c), as well as Ptgs2 and Ptx3 (both higher in M1). While there was no significant effect of PPAR-γ antagonism on Cxcr4 expression, cells treated with GW9662 expressed significantly higher levels of Ptgs2 and Ptx3 than cells treated with solvent (Figure 2A). One characteristic of M2c cells is their ability to phagocytose cell debris. While M2c polarized murine macrophages effectively phagocytosed zymosan, particle uptake was clearly reduced in cells treated with the PPAR-γ antagonist (Figure 2B). These observations imply that PPAR-γ activation is required for the downregulation of some pro-inflammatory genes as well as to support the induction of a pro-resolving phenotype by TGF-β. Consistent with these observations, PPAR-γ expression was significantly elevated in M2c versus M1 or M0 macrophages (Figure 3A). Given that M2c polarization was induced by adding TGF-β to M1 polarized macrophages, we determined which TGF-β type I receptor, i.e., activin receptor-like kinase (Alk) 1 or Alk5, mediated the TGF-β-induced increase in PPAR-γ levels. While neither solvent nor the Alk1 inhibitor LDN193189 prevented the TGF-β-induced increase in PPAR-γ (Figure 3B), the response was abolished in macrophages pretreated with the Alk5 inhibitor SD208. ## 3.3. PPAR-γ Activity in Differentially Polarized Macrophages from Wild-Type and sEH−/− Mice Next, we set out to determine whether or not mediators known to regulate PPAR-γ were implicated in the TGF-β-induced changes in PPAR-γ levels and gene expression. 
Given that arachidonic acid metabolism was one of the pathways altered by TGF-β (see Figure 1C), we focused on the potential role of arachidonic acid epoxides. These fatty acid mediators, such as 11,12-epoxyeicosatrienoic acid (11,12-EET), are reported to activate PPAR-γ [22,23,24,25,26], and their cellular levels are largely determined by the activity of the soluble epoxide hydrolase (sEH). Therefore, a luciferase construct containing three PPAR-γ responsive elements was expressed in macrophages from wild-type mice that were then polarized to the M1 and M2c phenotypes. Consistent with the increase in PPAR-γ protein levels, luciferase activity was clearly increased in M2c macrophages from wild-type mice (Figure 4A). Deletion of the sEH significantly blunted this response, which was reflected in the differential expression of PPAR-γ-regulated genes in M2c macrophages from the two genotypes (Figure 4B, Table S2). Indeed, the well-characterized PPAR-γ-regulated genes Gipr, Vldlr, and Rbp1 were all expressed at significantly lower levels in M2c macrophages from sEH−/− versus wild-type mice. A series of fatty acid epoxides are metabolized by the sEH, and it was possible to demonstrate higher levels of 11,12-EET and lower levels of its sEH-generated diol, 11,12-dihydroxyeicosatrienoic acid (11,12-DHET), in M2c polarized macrophages from sEH−/− versus wild-type mice (Figure 4C). Moreover, treating M1 polarized macrophages from wild-type mice with 11,12-EET prior to repolarization with TGF-β also decreased PPAR-γ activity (Figure 4D). ## 3.4. Regulation of PPAR-γ Levels by 11,12-EET The effects of 11,12-EET and of its diol, 11,12-DHET, on PPAR-γ protein levels were compared next. This revealed that the sEH substrate 11,12-EET effectively prevented the TGF-β-induced increase in PPAR-γ protein levels in murine macrophages (Figure 5A), whereas 11,12-DHET had no effect. 
Somewhat unexpectedly, 11,12-EET altered PPAR-γ protein levels without altering Pparg expression (Figure 5B), indicating that 11,12-EET may affect the stability of the PPAR-γ protein. At least in adipocytes, ligand-dependent PPAR-γ activation is associated with its subsequent proteasomal degradation [27]. To determine whether or not 11,12-EET decreased PPAR-γ levels by stimulating its proteasomal degradation, experiments were performed in the absence and presence of the proteasome inhibitor MG132. As before, 11,12-EET, but not 11,12-DHET, decreased PPAR-γ protein levels in M2c polarized macrophages, and proteasome inhibition prevented the effect (Figure 5C). ## 4. Discussion The results of this investigation revealed that the TGF-β-dependent repolarization of classically activated (M1) macrophages into a pro-resolving, highly phagocytic phenotype (M2c) relies on the increased expression and activation of PPAR-γ. Deletion of the sEH, which increases cellular levels of fatty acid epoxides, largely prevented the TGF-β-induced changes in macrophage gene expression as well as PPAR-γ activation. The effect seen in macrophages from sEH−/− mice was reproduced in cells from wild-type mice treated with the sEH substrate 11,12-EET and was attributed, at least in part, to the accelerated proteasomal degradation of PPAR-γ. In our study, we set out to determine changes in macrophage gene expression associated with the repolarization of classically activated (M1) macrophages into a pro-resolving phenotype by TGF-β. It is not surprising that repolarization resulted in marked alterations in macrophage gene expression and a decrease in the expression of pro-inflammatory markers. 
However, the observation that many of the genes increased in TGF-β-treated macrophages were classical PPAR-γ targets, e.g., Abcd2, Eepd1, and Tmem8, was unexpected, as TGF-β is a multifunctional cytokine that drives inflammation, fibrosis and cell differentiation, while PPAR-γ activation tends to promote the opposite effects [28]. The impact of TGF-β on gene expression was, however, consistent with its ability to increase PPAR-γ protein levels as well as transcription factor activity. The changes in gene expression were reflected in functional alterations, as zymosan phagocytosis by TGF-β-repolarized macrophages was clearly attenuated in cells treated with a PPAR-γ inhibitor. Our results are consistent with recent reports from other groups that linked the actions of TGF-β with the activation of PPAR-γ signaling (reviewed by [29]). For example, TGF-β signaling and the upregulation of PPAR-γ were reported to be essential for the development and homeostasis of alveolar macrophages [30]. On the other hand, PPAR-γ was reported to interact with Stat3 and Smad3 to interfere with TGF-β signaling and account for the functional antagonism between the BMP2 and TGF-β1 pathways in vascular smooth muscle cells [31]. Thus, it seems likely that a complex crosstalk exists between the two pathways. The results of our study also indicate that, in macrophages, the TGF-β-induced increase in PPAR-γ expression relies on the activation of Alk5. This fits well with a previous report that TGF-β induces M2-like macrophage polarization via Snail-mediated suppression of a pro-inflammatory phenotype, as the induction of Snail is also mediated by Alk5 [20]. PPARs are ligand-inducible transcription factors and are considered important therapeutic targets as they exert anti-atherogenic and anti-inflammatory effects on the vascular wall and immune cells, as well as acting to reduce insulin resistance and dyslipidaemia [32]. 
However, unlike many receptors that possess a limited number of ligands, there are numerous natural PPAR-γ ligands, in particular mediators derived from polyunsaturated fatty acids [33]. The EETs are among the latter compounds and are generated by the sequential action of cytochrome P450 enzymes and the sEH [34]. These fatty acid mediators are particularly interesting given that their actions have been attributed to PPAR activation [22,23,24,25,26], and the inhibition or deletion of the sEH to increase EET levels has anti-atherosclerotic effects in mouse models [35,36]. In our study, we observed that the activity of PPAR-γ was lower in TGF-β-stimulated macrophages from sEH−/− (EET high) than from wild-type (EET low) mice. While these findings were consistent with the clearly decreased levels of PPAR-γ protein in sEH-deficient macrophages, they seemed to directly contradict previous reports. The timing of the experiments can go a long way to accounting for the observations made, as PPAR-γ activity was generally assessed 48 h after TGF-β addition or stimulation with 11,12-EET. Thus, 11,12-EET probably initiates a transient increase in PPAR-γ activity that is terminated by an EET-stimulated pathway resulting in PPAR-γ degradation. Given that PPAR-γ levels were not decreased by 11,12-EET in cells treated with MG132, we propose that 11,12-EET can stimulate the proteasomal degradation of PPAR-γ. Certainly, PPAR-γ levels can be regulated by protein ubiquitination and degradation [27]. Which ubiquitin ligase is activated by 11,12-EET was not studied, but there is circumstantial evidence linking 11,12-EET with increased ubiquitination, as the cardiomyocyte-specific overexpression of CYP2J2, which generates 11,12-EET, has been reported to promote the ubiquitination of the pattern recognition receptor NLRX1 [37]. 
Taken together, our results indicate that macrophage levels of the sEH substrate 11,12-EET can modulate macrophage polarization by TGF-β, at least partly by promoting the ubiquitination and degradation of PPAR-γ. Given that sEH inhibition prevents the development of atherosclerosis in mice [35,36], and that the conversion of inflammatory macrophages to the M2 phenotype drives atherosclerosis regression [38], it may be interesting to determine how much of the phenotype observed can be attributed to changes in PPAR-γ expression.
# Examining Factors Associated with Dynapenia/Sarcopenia in Patients with Schizophrenia: A Pilot Case-Control Study ## Abstract Sedentary behavior in patients with schizophrenia causes muscle weakness, is associated with a higher risk of metabolic syndrome, and contributes to mortality risk. This pilot case-control study aims to examine the factors associated with dynapenia/sarcopenia in patients with schizophrenia. The participants were 30 healthy individuals (healthy group) and 30 patients with schizophrenia (patient group), matched for age and sex. Descriptive statistics, Welch’s t-test, cross-tabulations, adjusted residuals, Fisher’s exact probability test (extended), and/or odds ratios (ORs) were calculated. In this study, dynapenia was significantly more prevalent in patients with schizophrenia than in healthy individuals. Regarding body water, Pearson’s chi-square value was 4.41 (p = 0.04), and significantly more patients with dynapenia were below the normal range. In particular, body water and dynapenia showed a significant association, with an OR of 3.42 and a 95% confidence interval of [1.06, 11.09]. Notably, compared with participants in the healthy group, patients with schizophrenia were overweight, had less body water, and were at a higher risk for dynapenia. The impedance method and the digital grip dynamometer used in this study were simple and useful tools for evaluating muscle quality. To improve health conditions for patients with schizophrenia, additional attention should be paid to muscle weakness, nutritional status, and physical rehabilitation. ## 1. Introduction The prevalence of schizophrenia in Japan is estimated at 0.7% [1]. Patients with schizophrenia die on average 10 to 20 years earlier than healthy individuals [2,3]. 
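The association reported in the abstract (OR = 3.42, 95% CI [1.06, 11.09]) comes from a 2×2 cross-tabulation of dynapenia against below-normal body water. The underlying cell counts are not reproduced here, so the sketch below applies the standard Woolf (logit) interval to a hypothetical table:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] with a Woolf
    (logit) confidence interval: exp(ln(OR) +/- z * SE), where
    SE = sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: rows = exposure (below-normal body water yes/no),
# columns = outcome (dynapenia present/absent)
or_, lo, hi = odds_ratio_ci(15, 5, 5, 15)
# or_ == 9.0; here lo > 1, so the interval excludes 1
```

An interval whose lower bound exceeds 1, as with the paper's reported [1.06, 11.09], indicates a statistically significant positive association at the 5% level.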
The sedentary lifestyle common among patients with schizophrenia is associated with a higher prevalence of metabolic syndrome (cardiovascular changes due to diabetes mellitus, hypertension, and hypercholesterolemia) and contributes to mortality risk. Lifestyle factors that increase the risk of such metabolic syndrome include a lack of regular physical activity, poor food intake, substance use, and high rates of smoking [4]. Strassnig et al. [5] developed a comprehensive model to conceptualize multimodal relationships that predict impaired activities of daily living in patients with schizophrenia. According to these authors, limitations in physical abilities interfere with activities of daily living and elicit a state of physical infirmity observed in other chronic illnesses. A high prevalence of sarcopenic obesity has also been reported in patients with schizophrenia [6]. However, little is known about the risk factors for dynapenia/sarcopenia in patients with schizophrenia. Factors that contribute to severe limitations due to the pathophysiology of schizophrenia and the effects of medications are complex and include a sedentary lifestyle as well as factors due to the effects of medication therapy, which ultimately lead to a vicious cycle of obesity and cardiovascular metabolic risk [7]. A sedentary lifestyle and decreased functional motor skills in patients with schizophrenia reduce their quality of life [8]. Low physical activity levels are also associated with the use of antipsychotic drugs. This implies that increasing weight is related to limitations in physical functioning and restricts activities of daily living. Physical inactivity due to obesity also adds to the burden of schizophrenia in the form of reduced physical health-related quality of life. Rehabilitation programs focusing on these risk factors and on physical activity should be key for both prevention and treatment of disease and disablement in patients with schizophrenia [9].
Moreover, antipsychotic medications have been associated with weight gain and obesity in schizophrenia. Patients with schizophrenia consume unhealthy food, and their dietary patterns show a high consumption of saturated fat and a low intake of fruit and dietary fiber [10]. In patients with schizophrenia, the ability to supply oxygen to muscles during exercise and the ability of muscles to consume oxygen (cardiopulmonary endurance) are poorer than in healthy individuals. This means the level of cardiorespiratory fitness may be extremely low in patients with schizophrenia, amounting to a state of deconditioning and a very low capacity for sustained physical activity that is high in intensity; activities promoting low to moderate activity levels may therefore serve this population well and lead to highly relevant improvements in health prospects [11]. In a 12-year follow-up study of schizophrenia, the patient group had an excess of psychiatric and physical comorbidities (fractured neck of femur, parkinsonism, pneumonia, esophageal ulcer, respiratory failure, and bronchitis), including side effects of psychotropic drugs, compared to the age- and sex-matched controls. Specifically, this finding clearly demonstrates that parkinsonism-associated complications may play a dominant role in schizophrenia-related death in general hospitals. Reducing the risk of parkinsonism-associated complications through accurate detection and management of side effects of psychotropic and somatic medication and of related drug–drug interactions, continuous monitoring of physical status, accurate detection of concomitant metabolic, cardiovascular, and respiratory diseases, and awareness of preventive strategies for difficulty eating and aspiration pneumonia may help reduce parkinsonism-associated fatal consequences in general hospitals in patients with schizophrenia [12].
In the context of socioeconomic challenges, schizophrenia leads to particularly unhealthy lifestyles that include poor diets, little exercise, marked sedentary behavior, and high rates of smoking with commensurately low physical activity levels [13]. As a result, compared to the general healthy population, patients with schizophrenia have severe symptomatic limitations in physical capacity. Negative symptoms reduce the likelihood of patients’ engagement in goal-directed behavior, including physical activity, which has been noted to increase obesity and cardiometabolic risk and induce poor physical conditions, resulting in sarcopenic obesity and muscle weakness [6]. According to the European Working Group on Sarcopenia in Older People 2 (EWGSOP2), sarcopenia is suspected when [1] muscle weakness is confirmed, whereas sarcopenia is confirmed when [2] muscle mass or muscle quality decline is present in addition to muscle weakness [14]. As mentioned above, sarcopenia is defined as “loss of muscle mass or quality,” whereas dynapenia is defined as “loss of muscle strength” [15]. Sarcopenia is also associated with depressed mood, which in turn is associated with low muscle strength and physical performance [16]. Therefore, it is problematic that patients with schizophrenia frequently have negative symptoms such as depressed mood, which is associated with low physical function and low muscle strength. Regarding schizophrenia and nutritional status, Japanese inpatients with schizophrenia are more likely to be underweight and undernourished than outpatients [17]. Nutritional status is an issue for patients with schizophrenia in Japan. Therefore, it is important to consider activities of daily living, dynapenia, sarcopenia/presarcopenia, and nutritional status when considering symptom management in hospitalized patients with chronic mental illness. This pilot study aimed to examine the associated factors for dynapenia/sarcopenia in patients with schizophrenia. ## 2.1.
Study Participants This pilot case-control study enrolled 60 individuals in total, comprising 30 healthy participants (healthy group) and 30 patients with schizophrenia (patient group), matched by age, ranging from 40 to 89 years, and sex. ## 2.2. Data Acquisition Period The study’s data acquisition phase was from 17 August 2021 to 30 November 2021. ## 2.3. Target Selection Criteria Healthy group: Employees working at Hospital A and its Geriatric Health Care Facility. Patient group: Patients with schizophrenia admitted to Hospital A. Both groups were matched by sex and age. ## 2.4. Exclusion Criteria Healthy group: Individuals with a mental or physical disorder. Patient group: Individuals unable to understand instructions owing to a medical condition or medication status, or because of a physical disorder such as a history of cerebrovascular disease (e.g., stroke) or a neurological disease. ## 2.5.1. Body Mass Tanita monitors use bioelectrical impedance analysis technology, first developed by Tanita in 1992, to provide fast and accurate body composition results [18]. The RD-545 InnerScan Pro provides an in-depth analysis of 26 body composition measurements. The measurements included weight, body fat, muscle mass, muscle quality score, body mass rating, bone mass, visceral fat level, basal metabolic rate, metabolic age, total body water, and body mass index (BMI). The RD-545 InnerScan Pro can perform fat and muscle analysis individually for arm, leg, and trunk segments if hand electrodes are used [19]. The state of visceral fat accumulation is indicated as the visceral fat level score measured by the RD-545 InnerScan Pro. ## 2.5.2. Age, Height, and Weight Healthy group: Age and height were self-reported based on the hospital’s staff health examination form. Patient group: Age and height were obtained from the medical records. Weight was measured in both groups using a scale (RD-545 InnerScan Pro, TANITA Corporation, Tokyo, Japan). ## 2.5.3.
Grip Strength of the Hands A digital grip dynamometer (T.K.K.5401; Takei Scientific Instruments, Co., Ltd., Niigata, Japan) was used to individually measure the grip strength of each hand in a stable standing posture. ## 2.5.4. Skeletal Muscle Mass Index (SMI) The total limb skeletal muscle mass (kg) was calculated from the information obtained from the body mass measurement, and the data were divided by the square of the corresponding height (m2). ## 2.5.5. SARC-F Score The SARC-F score was presented by Morley as a screening tool for sarcopenia at the EU/US committee on sarcopenia in the frail elderly at the International Conference on Sarcopenia Research (ICSR) in Orlando in 2012 [20]. Data were self-reported by all participants using a questionnaire survey. ## 2.6. Sarcopenia/Dynapenia Assessment Method This study adopted the diagnostic criteria proposed by the Asian Working Group for Sarcopenia (AWGS) [21]. The SARC-F score, grip strength, and skeletal muscle mass were used as indicators. The specific criteria were as follows: Grip strength can be used to assess muscle weakness [20,21]. Peripheral quantitative computed tomography, dual-energy X-ray absorptiometry, and magnetic resonance imaging techniques can be used to assess skeletal muscle mass and quality [22]. Other than the aforementioned methods, bioelectrical impedance analysis can be used, which has the advantage of being inexpensive and portable. The cutoff values for sarcopenia in the Japanese population are 6.8 kg/m2 for men and 5.7 kg/m2 for women [23]. The SARC-F score was used to select participants with sarcopenia; those with a score of 4 or more points were selected. Based on the two sarcopenia criteria outlined in Section 1, muscle weakness and loss of muscle mass or muscle quality were evaluated. [1] Grip strength was used as an indicator of muscle weakness, defined for men and women as having a grip strength of less than 26 kg and less than 18 kg, respectively.
In addition, [2] skeletal muscle mass (kg/m2) was used as an indicator of muscle mass or muscle quality loss, and skeletal muscle mass loss was defined as a value less than 7.0 kg/m2 for men and less than 5.7 kg/m2 for women. Presarcopenia was defined as reduced skeletal muscle mass and normal grip strength. Dynapenia was defined as normal skeletal muscle mass and decreased grip strength. ## 2.7. Statistical Analysis Basic statistical parameters (mean ± standard deviation [SD], $95\%$ confidence interval [CI]) were calculated. Welch’s t-test was performed to compare the two study groups. For items that were significantly different between the two groups, cross-tabulations were performed, and adjusted residuals were calculated. Fisher’s exact probability test (extended), Pearson’s chi-square test, and/or odds ratios (ORs) were calculated. All statistical analyses were performed using SPSS 21.0 (IBM Corporation). Statistical significance was set at $p \leq 0.05$. ## 3. Results Among the study participants, $61.7\%$ (37/60) were women and $38.3\%$ (23/60) were men. The healthy group comprised $63.3\%$ (19/30) women and $36.7\%$ (11/30) men, whereas the patient group consisted of $60.0\%$ (18/30) women and $40.0\%$ (12/30) men. In this study, dynapenia and sarcopenia/presarcopenia were assessed. Among the 30 participants in the patient group, $10.0\%$ (3/30 [1/18 women, 2/12 men]) met the criteria for sarcopenia, $3.3\%$ (1/30 [1/12 men]) for presarcopenia, and $60.0\%$ (18/30 [14/18 women, 4/12 men]) for dynapenia. The corresponding results of the healthy group showed that sarcopenia and presarcopenia were not present ($0\%$) and that $13.3\%$ (4/30 [3/19 women, 1/11 men]) met the criteria for dynapenia. Table 1 shows the results of Welch’s t-test.
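The SMI computation (Section 2.5.4) and the cutoff-based case definitions of Section 2.6 reduce to a few comparisons. A minimal Python sketch; the function names and example values are illustrative and not taken from the study data:

```python
def smi(limb_muscle_mass_kg: float, height_m: float) -> float:
    """Skeletal muscle mass index: total limb skeletal muscle mass / height squared."""
    return limb_muscle_mass_kg / height_m ** 2

def classify(sex: str, grip_kg: float, smi_kgm2: float) -> str:
    """Classify per the cutoffs described in Section 2.6 (AWGS-based).

    Muscle weakness: grip < 26 kg (men) or < 18 kg (women).
    Low muscle mass: SMI < 7.0 kg/m2 (men) or < 5.7 kg/m2 (women).
    """
    weak = grip_kg < (26.0 if sex == "M" else 18.0)
    low_mass = smi_kgm2 < (7.0 if sex == "M" else 5.7)
    if weak and low_mass:
        return "sarcopenia"
    if low_mass:
        return "presarcopenia"
    if weak:
        return "dynapenia"
    return "normal"

# Illustrative values only (not study data):
print(classify("F", grip_kg=16.0, smi_kgm2=6.0))   # dynapenia
print(classify("M", grip_kg=30.0, smi_kgm2=6.5))   # presarcopenia
```

Note that the SARC-F screening step (score ≥ 4) described in the text precedes this rule in the study's workflow and is omitted here for brevity.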
Body water content was significantly higher in the healthy group (53.56 ± $3.94\%$) than in the patient group (49.77 ± $6.58\%$; $t = 2.71$, $p \leq 0.001$). The visceral fat level score was 6.60 ± 3.71 in the healthy group and 9.12 ± 5.35 in the patient group ($t = 2.11$, $p = 0.04$). Body fat content was 24.95 ± $6.05\%$ in the healthy group and 30.41 ± $9.00\%$ in the patient group ($t = 2.76$, $p \leq 0.01$). Likewise, BMI was 21.89 ± 2.30 kg/m2 for the healthy group and 23.88 ± 4.65 kg/m2 for the patient group ($t = 2.10$, $p = 0.04$). Left grip strength was 29.16 ± 9.07 kg for the healthy group and 18.53 ± 8.38 kg for the patient group ($t = 4.71$, $p \leq 0.001$), whereas right grip strength was 30.05 ± 7.98 kg for the healthy group and 21.26 ± 10.92 kg for the patient group ($t = 3.56$, $p \leq 0.001$). These findings showed that for both sides, the grip strength of the patient group was significantly weaker than that of the healthy group. As shown in Table 2, the patient group was significantly more likely to have dynapenia or sarcopenia/presarcopenia (Fisher’s exact test, $p \leq 0.0001$; OR, 17.88; $95\%$ CI [4.74, 67.43]). The association of the study group with dynapenia, including sarcopenia and presarcopenia (hereafter referred to as dynapenia in Section 3), was analyzed based on items with significant differences in Table 1. No significant association was found for the parameters of visceral fat level score, body fat, and BMI. In contrast, for body water, the result of Pearson’s chi-square test was 4.41 ($p = 0.04$), and significantly more people with dynapenia were below the normal range. We also confirmed a significant association for body water (OR, 3.42; $95\%$ CI [1.06, 11.09]). ## 4. Discussion As shown in Table 1, the patient group had significantly higher body fat, visceral fat level scores, and BMI.
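The reported odds ratio for dynapenia (including sarcopenia/presarcopenia) can be reconstructed from the counts in the Results: 22 of 30 patients and 4 of 30 healthy participants met the criteria. A minimal sketch, assuming the standard Woolf log-odds interval (the method commonly reported by statistical packages such as SPSS); the function name is illustrative:

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.959964):
    """Sample odds ratio and Woolf 95% CI for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    z defaults to the inverse normal CDF at 0.975."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Patient group: 22/30 with dynapenia (incl. sarcopenia/presarcopenia); healthy: 4/30.
or_, lo, hi = odds_ratio_ci(22, 8, 4, 26)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # ≈ 17.88 4.74 67.43
```

With these counts the sketch reproduces the paper's reported OR of 17.88 and a CI close to [4.74, 67.43].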
In addition, although the mean BMI of the patient group was not at the obese level, the high visceral fat level score was deemed a problem when considered overall from the cross-tabulation results in Table 2. As shown in Table 2, the patient group had a high percentage of individuals diagnosed with dynapenia, with an OR indicating 17.88 times the odds of dynapenia compared with healthy individuals. Thus, it was suggested that being afflicted with schizophrenia is one factor associated with dynapenia. Moreover, Table 2 shows that no significant association by study group was found for body fat, visceral fat level score, or BMI; however, body water content was significantly associated with dynapenia, with the OR indicating 3.42 times higher odds of dynapenia among those with body water below the normal range. For these reasons, the patient group in this study may have increased fat, as well as decreased body water content and muscle mass, owing to a sedentary lifestyle [9,24]. Sex differences in body fat and water content in patients with schizophrenia have been reported [6]. The body water content was predominantly higher in the healthy group. Body water refers to water contained in various body compartments, including blood, lymphatic fluid, extracellular fluid, and intracellular fluid [25]. These fluids play important roles in the body, such as transporting nutrients and maintaining a constant body temperature, and they tend to decrease with age. In addition, people with high body fat tend to have a lower body water content [26]. This trend is also consistent with the previous study by Bulbul et al. [27]. Therefore, it is necessary to focus on the trends of high body fat and low water content in the patient group. Of the 307 participants in the study by Mori et al. [28], $60.9\%$ were assessed as normal, and $25.7\%$, $8.1\%$, and $5.2\%$ were found to have presarcopenia, sarcopenia, and dynapenia, respectively.
Reduced grip strength is a critical indicator of dynapenia [29]. In this study, grip strength was significantly lower in patients with schizophrenia than in healthy individuals. Because many patients with schizophrenia have dynapenia, grip strength may be a convenient screening index for dynapenia in psychiatric hospitals. The participants of the study by Kobayashi et al. were volunteers aged over 60 years who were in good general health [30]. Their study found that in Japan, the rates of sarcopenia, presarcopenia, and dynapenia were $10\%$, $22\%$, and $8\%$ in men, and $19\%$, $23\%$, and $13\%$ in women. According to Neves et al., sarcopenia and dynapenia were identified in $15.3\%$ and $38.2\%$ of older persons [31]. In this study, $13.3\%$ of the healthy individuals had dynapenia, whereas $60.0\%$ of the patient group had dynapenia, $10.0\%$ had sarcopenia, and $3.3\%$ had presarcopenia. Thus, our data suggest that the prevalence of dynapenia is high among patients with schizophrenia. Appetite regulation and physical activity affect energy balance and changes in body fat mass. In some patients, inflammation induces anorexia and fat loss along with sarcopenia. In others, appetite is maintained, despite the activation of systemic inflammation, leading to sarcopenia with normal or increased BMI. Inactivity contributes to sarcopenia and increased fat tissue in aging and disease [32]. In a previous study of the BMI status of hospitalized Japanese patients with schizophrenia, underweight and obesity were characteristic in schizophrenia inpatients compared with the general population. In particular, regarding the characteristics of underweight, a previous study showed that the prevalence of hypotriglyceridemia was significantly higher in the underweight group than in the normal-weight group and in overweight/obese schizophrenia inpatients [33].
Harvey and Strassnig [34] suggested that the cognitive limitations of people with schizophrenia not only correlate with disability directly, but contribute substantially to other skills deficits (functional capacity; social competence) that exacerbate disability outcomes. Impaired cognition and negative symptoms, particularly in the domains of reasoning and problem solving and reinforcement valuation, can lead to deficits in functional capacity that then lead to poor dietary and exercise choices, contributing to poor functional outcomes. In another study, age, certification of long-term care, and malnutrition were identified as risk factors for sarcopenia [35]. Sarcopenia is thought to primarily explain the age-related loss of muscle strength, such as dynapenia, commonly seen in older people [36]. However, recent longitudinal data indicate that the loss of muscle strength occurs significantly faster than the accompanying loss of muscle mass [37]. On the other hand, gains in muscle mass and strength afforded by resistive training are associated with a small but significant improvement in physical performance. It is noteworthy that lower intensity mechanical loading such as aerobic exercise, despite being considerably less effective for inducing muscle hypertrophy, has been found to promote protein synthesis and expression of growth-related genes and inhibit the expression of muscle breakdown-related genes [37]. Muscle weakness is known to decrease physical function and increase the risk of mortality [38]. Regarding the changes in physical function associated with aging, muscle strength declines by $30\%$ and muscle area by $40\%$ between 20 and 70 years of age [39]. At the age of 75 years, muscle strength declines at a rate of 2.5–$3\%$ per year for women and 3–$4\%$ per year for men, and muscle mass is lost at a rate of 0.64–$0.70\%$ per year for women and 0.80–$0.98\%$ per year for men [37]. Kitamura et al.
[40] found sex-specific patterns of correlates with sarcopenia. Significant sarcopenia-related factors in addition to ageing were hypoalbuminaemia, cognitive impairment, low activity, and recent hospitalization among men, and cognitive impairment and depressed mood among women. It is important to focus on these conditions. Compared to young adults, older adults have a lower appendicular skeletal muscle mass index (ASMI, kg/m2) and a significantly higher body fat percentage [41]. It has been noted that diabetic patients with a high body fat percentage in addition to low BMI may develop sarcopenia [42]. Moreover, the prevalence of diabetes in patients with schizophrenia in Japan has been reported to be $8.6\%$ [43]. Protein intake is necessary for efficient muscle growth. An older person with adequate muscle mass needs 1.0–1.2 g of protein per kg of body weight per day to maintain muscle mass, i.e., about 60–72 g per day if the person weighs 60 kg [44]. However, this intake is not sufficient for those who must gain muscle mass due to sarcopenia, and they should have an intake of 1.2–1.5 g of protein per kg of body weight per day, i.e., 72–90 g per day if they weigh 60 kg [45]. Thus, it is important to control the balance of restricted caloric intake with guaranteed protein intake for patients with dynapenia. However, if a patient has kidney problems, it is critical to pay much more attention to an appropriate protein and calorie intake during the rehabilitation process [46]. Based on the BMI findings of our study, the patient group was not underweight. Our study subjects were inpatients; they consumed a diet regulated by a psychiatrist and a dietitian. However, outpatients may not be eating an appropriate diet due to unbalanced diets, poverty, etc. With this in mind, we should conduct the main case-control study following this pilot study.
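The protein recommendations above are simple per-kilogram arithmetic. A minimal sketch (the per-kg ranges come from the cited guidance; the helper name is illustrative):

```python
def daily_protein_g(weight_kg: float, g_per_kg_low: float, g_per_kg_high: float):
    """Daily protein target range in grams for a given body weight."""
    return weight_kg * g_per_kg_low, weight_kg * g_per_kg_high

# Maintenance for an older person (1.0-1.2 g/kg/day) at 60 kg:
print(daily_protein_g(60, 1.0, 1.2))  # (60.0, 72.0)
# Target when muscle mass must be regained due to sarcopenia (1.2-1.5 g/kg/day):
print(daily_protein_g(60, 1.2, 1.5))  # (72.0, 90.0)
```

The two calls reproduce the 60–72 g and 72–90 g ranges quoted in the text for a 60 kg person.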
Furthermore, inpatients may have a lower average BMI than outpatients, who are free to eat whatever they want at home, because inpatients’ food intake is controlled to prevent excessive weight gain. It was considered important to keep these points in mind when managing their health. ## Limitations and Future Research Since data on daily intakes, such as nutritional status, were not obtained in this study, it is necessary to obtain data on “official” caloric intake based on hospital diets, such as daily caloric intake, for better analyses in future studies. Additionally, it is necessary to consider “unofficial” caloric intake, such as snacks. Moreover, the patient’s amount of activity needs to be considered. This pilot study was a small-scale study conducted to inform, predict, and direct an intended future full-scale study. The association of low body water and dynapenia in patient participants suggests that low body water might be a risk factor for dynapenia in these patients. Underweight is highly prevalent in Japanese inpatients with schizophrenia. Psychiatrists should be aware of underweight and its potential health risks. Treating psychiatrists should also be responsible for providing any necessary nutritional interventions [47]. Improved physical health appears to be achievable in people with schizophrenia, who are often challenged by motivational difficulties in attending regular exercise, and has beneficial implications for physical function during activities of daily living, lifestyle-related diseases, and early death. Specifically, physical training is an effective countermeasure to improve the low aerobic endurance and skeletal muscle strength in these patients [48]. Furthermore, the main study following this pilot study should include not only body composition (low body water, visceral fat level, and muscle mass), grip strength, and joint range of motion, but also medication content, heart rate variability, and motor velocity [49].
Other factors (physical function during activities of daily living, gait and psychiatric symptoms specific to schizophrenia, age, and length of hospitalization) must also be considered in dynapenia in patients with schizophrenia. ## 5. Conclusions This pilot study examined the risk factors for dynapenia/sarcopenia in patients with schizophrenia. Patients with schizophrenia were overweight, had less body water than the healthy study participants, and were at a higher risk of dynapenia than participants in the healthy group. The impedance method used in this study is a simple and useful tool for evaluating muscle quality in conditions such as dynapenia. To improve health conditions for patients with schizophrenia, additional attention should be paid to muscle weakness, nutritional status, and physical rehabilitation. Future research will include a larger study building on this pilot study.
# Validation of an LC-MS/MS Method for the Determination of Abscisic Acid Concentration in a Real-World Setting ## Abstract One of the most relevant aspects in evaluating the impact of natural bioactive compounds on human health is the assessment of their bioavailability. In this regard, abscisic acid (ABA) has attracted particular interest as a plant-derived molecule mainly involved in the regulation of plant physiology. Remarkably, ABA was also found in mammals as an endogenous hormone involved in the upstream control of glucose homeostasis, as evidenced by its increase after a glucose load. The present work focused on the development and validation of a method for the determination of ABA in biological samples through liquid–liquid extraction (LLE), followed by liquid chromatography–mass spectrometry (LC-MS) analysis of the extract. To test method suitability, this optimized and validated method was applied in a pilot study on serum samples from eight healthy volunteers to evaluate ABA concentration after consumption of a standardized test meal (STM) and the administration of an ABA-rich nutraceutical product. The results obtained could meet the demands of clinical laboratories to determine the response to a glucose-containing meal in terms of ABA concentration. Interestingly, the detection of this endogenous hormone in such a real-world setting could represent a useful tool to investigate the occurrence of impaired ABA release in dysglycemic individuals and to monitor its eventual improvement in response to chronic nutraceutical supplementation. ## 1. Introduction 2-cis,4-trans-Abscisic acid (ABA) is a sesquiterpenoid phytohormone synthesized via an indirect pathway from the cleavage products of carotenoids [1]. This molecule has been studied for several decades with regard to its pivotal role as a regulator of plant growth and response to abiotic and biotic stress [2,3].
Due to ABA involvement as a growth regulator, immature fruits have been found to contain the highest concentration of this phytohormone [4,5] among vegetal matrices. In this regard, a screening of various immature fruits derived from fruit thinning identified thinned nectarines (TN) as the richest source of this bioactive compound [6]. Nevertheless, ABA has sparked particular interest not only as a phytohormone commonly found in vegetables and fruits, but has also been found in mammals as an endogenous hormone involved in the upstream control of glucose homeostasis [7,8,9] via interaction with its specific receptor lanthionine synthetase C-like 2 (LANCL2) [10]. To date, the majority of evidence for the hypoglycemic effects of ABA in vivo has addressed a role in the stimulation of peripheral glucose uptake by increasing the expression and translocation of glucose transporter 4 (GLUT4) [11,12,13,14]. In addition, it is noteworthy that in patients with type 2 diabetes mellitus (T2DM) or gestational diabetes, a decreased release of ABA has been found following a glucose load [15]. This evidence further strengthens the importance of monitoring serum concentrations of ABA in individuals with altered glucose metabolism and supplementing them with plant-based exogenous sources of ABA. In this context, several studies involving both animal and human models demonstrated the significant beneficial effects of ABA-containing nutraceuticals on the glycemic profile in prediabetic and diabetic subjects, in association with an insulin-sparing mechanism of action [6,14,16,17,18,19]. In virtue of its insulin-independent mechanism of action [13], ABA supplementation may be indicated as a useful approach to improve glucose tolerance in individuals with insulin deficiency and/or insulin resistance.
In this regard, there is a growing scientific consensus that sustained stimulation of insulin release from pancreatic β-cells under conditions of chronic hyperglycemia may ultimately contribute to their depletion [20]. In view of this evidence, hypoglycemic molecules able to decrease glycemia without increasing insulinemia are highly desirable, as they could improve the survival and function of pancreatic β-cells. On the other hand, although a wide variety of bioactive compounds of natural origin have been tested for their beneficial potential in the control of diabetic conditions [21,22], the evaluation of their bioavailability still represents a crucial aspect [23,24]. Identification of ABA as a plant hormone is usually performed with various methods, mainly in plant matrices, such as gas chromatography/mass spectrometry (GC/MS) [25] and immunological assays, i.e., enzyme-linked immunosorbent assay (ELISA) [26]. Although these methods are able to assess ABA concentration levels, they suffer from some disadvantages. For instance, the ELISA assay requires a long preparation time and has low specificity and reproducibility, while GC/MS requires derivatization of the sample [25]. Based on such considerations, the present work focused on the development and validation of a method for the determination of ABA by liquid chromatography–mass spectrometry (LC-MS), through liquid–liquid extraction (LLE) from a biological matrix, i.e., serum. Subsequently, the optimized and validated method was applied to serum samples from eight healthy volunteers who consumed a standardized test meal (STM) with concomitant supplementation with a nutraceutical product based on TN rich in ABA, to test the method in a real-world setting. Finally, the glycemic and insulinemic response in the above-mentioned subjects was evaluated in association with ABA serum levels at different time points of analysis. ## 2.1.1.
Participants and Standardized Test Meal Composition Briefly, healthy subjects of both sexes were recruited in May 2019 by Samnium Medical Cooperative (Sant’Agata De’ Goti, Italy) as a subset of volunteers participating in a randomized clinical trial. The volunteers’ letter of intent, the protocol, and the synoptic documents of the study were submitted to the Scientific Ethics Committee of AO Rummo Hospital (Benevento, Italy). The study was approved by the committee (protocol no. 28, 15 May 2017) and was conducted in accordance with the Helsinki Declaration of 1964 (as revised in 2000). The study was listed on the ISRCTN registry (www.isrctn.com, accessed on 24 June 2022) with ID ISRCTN16732651. A total of 8 healthy subjects aged 18–83 years were invited to participate. Exclusion criteria were diabetes mellitus (DM) type 1 and type 2, liver, heart, or renal disease, drug therapy or intake of dietary supplements containing ABA, underweight (body mass index < 18.5 kg/m2), pregnancy or suspected pregnancy, birch pollen allergy. All participants received oral and written information about the study before giving written informed consent. Before inclusion in the study, volunteers were subjected to self-reporting questionnaires involving the following items: residence, occupation, smoking status, alcohol consumption, drug administration, and dietary habits. The volunteers meeting the inclusion criteria (body mass index 27–35 kg/m2; waist circumference, men ≥ 102 cm and women ≥ 88 cm) were assigned to consume a standardized test meal (STM), immediately after the administration of TN (1 g, lyophilized) containing 15 µg of ABA, as reported in our previous work [6]. TN treatment was self-administered as a tablet. The STM composition consisted of white bread (100 g) with 50 g of jam and 100 g of mozzarella and 200 mL of fruit juice. 
These amounts were chosen based on indications of a balanced meal, as they provided $50\%$ of calories from carbohydrates, $20\%$ from protein, and $30\%$ from fat [27]. ## 2.1.2. Experimental Procedures At the beginning of the study, the height, body weight, and waist circumference (WC) of all patients were measured and the Body Mass Index (BMI) was determined. Glucometabolic parameters were determined before the STM consumption as baseline, except for fasting plasma glucose (FPG) and fasting plasma insulin (FPI), which were evaluated before and after consuming the STM. After a 12-h fasting period, blood samples were collected to measure FPG, FPI, triglycerides (TG), total plasma cholesterol (TC), high-density lipoprotein cholesterol (HDL-C), alanine aminotransferase (ALT), aspartate aminotransferase (AST), and glycated hemoglobin (HbA1c). The concentration of the above-mentioned parameters was assayed by enzymatic colorimetric methods (Diacron International, Grosseto, Italy). The Friedewald formula was used to calculate LDL cholesterol levels. Plasma insulin levels were measured by ELISA (DIAsource ImmunoAssay S.A., Nivelles, Belgium) on a Triturus analyzer (Diagnostic Grifols S.A., Barcelona, Spain). HbA1c was measured using a commercially available kit (InterMedical s.r.l., Grassobbio, Italy). ## 2.2.1. Chemicals and Reagents ABA used as the primary standard (purity ≥$98\%$, HPLC) was purchased from Sigma-Aldrich (Milan, Italy). Chromatographic-grade solvents (methanol, formic acid, and ethyl acetate; minimum purity $99.9\%$) were also purchased from Sigma-Aldrich (Milan, Italy), as was the internal standard (IS), 4,4′-sulfonyldiphenol (BPS; minimum purity $98\%$). Ultra-purified Milli-Q water was produced in-house (conductivity 0.055 μS cm−1 at 25 °C, resistivity 18.2 MΩ·cm). ## 2.2.2.
Real Sample Preparation and Extraction Blood samples (5 mL) were collected from the antecubital vein into Vacu-test® tubes; the samples were immediately centrifuged at 2200 rpm for 20 min and the supernatant was frozen and stored at −80 °C until processing. Both synthetic and real samples underwent liquid–liquid extraction (LLE). Briefly, sample preparation was performed according to the following procedure: 75 µL of serum were transferred to a 2 mL vial and spiked with 40 µL of a 100 ppb BPS solution, to achieve a final concentration of 40 ppb, with the addition of 340 µL of methanol and 2 µL of 12 N HCl solution. Each sample was then vortexed and stored in ice for 2 min. Afterwards, 500 µL of ethyl acetate was added to each sample, which was vortexed and finally centrifuged at 10,000 rpm for 5 min at 4 °C. The supernatant (a fixed volume of 700 µL) was transferred to a 4 mL vial, dried in a Savant™ SpeedVac™ (Thermo Scientific™, Hyannis, MA, USA), and stored until analysis. Dried samples were dissolved in 50 μL of CH3OH:H2O 50/50 v/v, vortexed, and, after 45 min to facilitate the dissolution, another 50 μL of CH3OH:H2O 50/50 v/v was added. The samples were again centrifuged at 3,500 rpm for 5 min and the supernatant was transferred to a 1.5 mL glass insert and injected into the liquid chromatography–mass spectrometry (LC-MS) system. BPS was chosen as the internal standard (IS) for its lipophilicity, to assess the recovery of each extraction. ## 2.2.3. Equipment Analytical determination was performed on an Ultimate 3000 LC system (Dionex/Thermo Scientific™, San Jose, CA, USA) coupled to a linear ion trap LTQ XL™ (Thermo Scientific™, San Jose, CA, USA) with an electrospray ionization source. The separation was performed on a Luna® Omega 3 µm Polar C18 column (100 × 2.1 mm) (Phenomenex, Torrance, CA, USA). Tuning and data acquisition were carried out using Xcalibur, and quantification using Qual Browser software, version 4.4.
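The centrifugation steps above are reported in rpm; converting to relative centrifugal force (RCF, × g) requires the rotor radius, which is not reported here, so the radius in this sketch is an assumed example value.

```python
# Standard rpm-to-RCF conversion: RCF (× g) = 1.118e-5 × radius (cm) × rpm².
# The 8 cm rotor radius below is an assumption, not a value from the protocol.
def rcf_from_rpm(rpm, radius_cm):
    return 1.118e-5 * radius_cm * rpm ** 2

rcf_2200 = rcf_from_rpm(2200, 8.0)    # serum separation step
rcf_10000 = rcf_from_rpm(10000, 8.0)  # post-extraction step
```

This is only a reproducibility aid: the same rpm on rotors of different radii yields different g-forces.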
## 2.2.4. LC-MS/MS Conditions The samples, 5 μL of each, were injected from the autosampler (Ultimate 3000) and analyzed under the following chromatographic conditions: eluent A, water with $0.1\%$ v/v formic acid; eluent B, acetonitrile with $0.1\%$ v/v formic acid; flow rate set to 0.4 mL min−1, at a temperature of 35 ± 2 °C. Gradient elution was accomplished as follows: 0–2.0 min, $5\%$ B; 2.0–9.0 min, $95\%$ B; 9.0–12.0 min, $95\%$ B; 12.1–16.0 min, $5\%$ B. All mobile phases were vacuum-filtered through 0.45 μm nylon membranes (Millipore®, Burlington, MA, USA). The electrospray ionization (ESI) mass spectrometer (MS) was operated in negative ion mode using selective reaction monitoring (SRM) with nitrogen as the nebulizer, auxiliary, collision, and curtain gas. The main working source/gas parameters of the mass spectrometer were optimized and maintained as follows: curtain gas, 8; nebulizer gas, 8. The instrumental parameters employed were as follows: ESI spray voltage in the negative-ion mode, 4 kV; sheath gas flow-rate, 70 arb; auxiliary gas flow-rate, 20 arb; capillary voltage, −38 V; capillary temperature, 350 °C; and tube lens, 95 V. ABA was monitored as the [M-H]− ion according to its m/z value. ## 2.2.5. Calibration Curve and Linearity European validation guidelines were followed to validate the method [28]. Stock solutions of ABA were obtained by dissolving the reference standard in $100\%$ methanol to obtain a final concentration of 2,000 ppm. Five solutions with different concentrations (40 ppb, 20 ppb, 10 ppb, 4 ppb, 2 ppb) were prepared by diluting this stock. Linearity was tested by plotting the average peak areas against the concentration (ppb) of ABA. Calibration curve parameters (coefficient of determination R2, slope, and intercept) were obtained by linear regression using the method of least squares, concentrations were back-calculated from the peak areas, and mean accuracy values were determined [29,30]. ## 2.2.6.
Limits of Detection (LOD) and Quantification (LOQ) LOD and LOQ were estimated as the concentrations providing signals equal to 3 and 10 times the standard deviation of the response, respectively. They were calculated based on the following equations: LOD = 3·SD/S and LOQ = 10·SD/S [31], where SD is the standard deviation of the intercept response with the y-axis of the calibration curves, and S is the slope of the calibration curve. The spike level (2 ppb) was within the appropriate range and was assessed by running the measurement ten times. ## 2.2.7. Precision and Accuracy The method’s precision was evaluated by running five replicates of the sample on the same day and on two different days to cover both intra-day and inter-day precision, expressed as relative standard deviation (RSD%). Repeatability was assessed using the nominal concentration of ABA (2 ppb). The accuracy of the method was determined using samples spiked with 2–40 ppb of ABA (quality control samples, QCs), evaluated at each level in triplicate, and reported as a percentage of the nominal value. ## 2.2.8. Selectivity Serum working calibration standards were prepared using sera already present in the archive of our laboratory and processed for other research, to confirm the absence of ABA and that no signal interfered at the retention time of ABA. These sera, considered as blanks, were also employed to optimize the extraction process. ## 2.2.9. Carry-Over The carry-over effect of the method was evaluated by injecting methanol solvent after running the highest-concentration samples of ABA spiked in serum (three times) and observing whether signals occurred within the retention windows of the target chemicals. ## 2.2.10. Matrix Effect The matrix effect was investigated by calculating the ratio of the peak area in the presence of the matrix (matrix spiked with ABA post extraction) to the peak area in the absence of the matrix (ABA in methanol) [32].
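The calibration fit (Section 2.2.5) and the LOD/LOQ formulas above (LOD = 3·SD/S, LOQ = 10·SD/S) can be sketched as follows. The peak areas are illustrative placeholders, not measured values, and SD is taken here as the standard deviation of the intercept estimated from the regression residuals.

```python
# Least-squares calibration of mean peak area vs. concentration, then
# LOD/LOQ from the intercept SD and slope. Areas are illustrative only.
from statistics import mean

conc = [2.0, 4.0, 10.0, 20.0, 40.0]            # ppb calibration levels
area = [1.1e4, 2.0e4, 5.2e4, 1.03e5, 2.05e5]   # hypothetical mean peak areas

n = len(conc)
mx, my = mean(conc), mean(area)
sxx = sum((x - mx) ** 2 for x in conc)
slope = sum((x - mx) * (y - my) for x, y in zip(conc, area)) / sxx
intercept = my - slope * mx
residuals = [y - (slope * x + intercept) for x, y in zip(conc, area)]
r2 = 1 - sum(r ** 2 for r in residuals) / sum((y - my) ** 2 for y in area)

# Residual SD propagated to the intercept, then the LOD/LOQ relations above
s_res = (sum(r ** 2 for r in residuals) / (n - 2)) ** 0.5
sd_intercept = s_res * (1 / n + mx ** 2 / sxx) ** 0.5
lod = 3 * sd_intercept / slope
loq = 10 * sd_intercept / slope
```

Concentrations of unknowns are then back-calculated as (area − intercept) / slope, as described in Section 2.2.5.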
The serum matrix blank was spiked with the analyte at each concentration of the linear range (2 ppb, 4 ppb, 10 ppb, 20 ppb, and 40 ppb). The ratio was calculated as follows: [1] Matrix effect $(\%) = \frac{\text{peak area in presence of matrix}}{\text{peak area in absence of matrix}} \cdot 100$ ## 2.2.11. Recovery The recovery was assessed by evaluating the relative abundance of the BPS peak (I.S.) spiked before the extraction procedure and calculated as follows: [2] Recovery $(\%) = \frac{\text{found concentration}}{\text{standard concentration}} \cdot 100$ The results of the real samples were corrected for the recovery. ## 3.1. Anthropometric and Glucometabolic Parameters The characteristics of the patient population at baseline are shown in Table 1. A total of eight healthy adults (three men and five women) aged 18 to 45 years with a BMI between 18 and 25 kg/m2 met the inclusion criteria and were therefore eligible to participate in the study. The group was well balanced in terms of demographic and clinical factors. ## 3.2. Two-Hour Glycemic and Insulinemic Responses to Standardized Test Meal Following the STM, which was preceded by administration of the nutraceutical supplement containing ABA, mean plasma glucose levels reached a peak at 30 min and gradually decreased to pre-prandial levels by 120 min. Mirroring the plasma glucose response curve, the post-prandial insulinemic response curve peaked at 30 min and gradually declined to the pre-prandial level by 120 min (Figure 1). A similar trend can be observed for serum ABA concentrations after the consumption of the STM and the ABA-rich nutraceutical product in the volunteers under investigation, as shown in Figure 2. ## 3.3. Optimization of Chromatographic Method Different “synthetic” samples with known ABA concentrations, i.e., methanolic solutions and sera spiked with ABA, were used for method development. These samples were subjected to the above-mentioned method in order to evaluate its efficiency in isolating and detecting abscisic acid in a complex biological matrix.
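Equations (1) and (2) above, together with the recovery correction applied to real samples, can be expressed as small helpers; the values used in the comments are illustrative, not the study's measurements.

```python
def matrix_effect_pct(area_in_matrix, area_in_solvent):
    # Eq. (1): post-extraction spiked matrix vs. neat-solvent standard
    return area_in_matrix / area_in_solvent * 100

def recovery_pct(found_conc, standard_conc):
    # Eq. (2): found vs. nominal (standard) concentration of the IS
    return found_conc / standard_conc * 100

def correct_for_recovery(measured_conc, recovery):
    # Real-sample results were corrected for the measured recovery
    return measured_conc * 100 / recovery

me = matrix_effect_pct(40.0, 100.0)        # 40% with these example areas
rec = recovery_pct(1.0, 2.0)               # 50% with these example values
corrected = correct_for_recovery(1.0, rec)
```

A matrix effect below 100% indicates ion suppression by co-extracted serum components, which is why the method was validated in matrix rather than in neat solvent.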
The proposed method for the extraction and quantification of ABA in the serum matrix was easy to handle and sensitive; it was optimized after several changes to the operating conditions. For the extraction procedure, different organic solvents at various percentages with water were tested. Ethyl acetate was the most efficient solvent for extracting ABA from the serum matrix (data not shown). The spike levels (40.0 ppb and 2.0 ppb) were in the recommended range, i.e., calculated LOD < spike level < 10 × calculated LOD. For LC-MS analysis, we optimized the method using different reversed-phase stationary phases (a Luna® Omega 3 µm Polar C18 column (100 × 2.1 mm) (Phenomenex, Torrance, CA, USA) and an Inertsil ODS-3 column (2.1 mm × 100 mm, 5 μm) (Torrance, CA, USA)), and by varying the gradient elution program, to achieve adequate resolution of the two analytes from the interferents. Optimal transitions were obtained for ABA (C15H20O4, MW: 264.32 g/mol) at m/z 152.000, and for BPS (C12H10O4S, MW: 250.27 g/mol) at m/z 107.000. The R-squared value (r2 = 0.9981) shows good linearity over the range of the calibration curves performed in the serum matrix, from 2 ppb to 40 ppb. The sensitivity of the developed method is appreciable from the listed LOD and LOQ parameters, with values of 1.59 ppb and 5.31 ppb, respectively. The RSD% of within-run precision was $2.30\%$, while the RSD% of between-run precision was $12.01\%$. Repeatability was assessed using repeatedly frozen and thawed ABA samples; we did not observe any differences in the raw data or any degradation products. Recovery from the serum matrix, evaluated at high and low spiking concentrations (40 ppb and 2.0 ppb), was $70.3\%$. The matrix effect was $39.97\%$, and variations in the experimental parameters did not result in any appreciable change in the method performance. Table 2 summarizes all method validation parameters.
These results demonstrate that the analytical method developed provides a reliable response for the analysis of ABA in such a complex biological matrix. Selectivity is the ability of an analytical method to differentiate and quantify the analyte in the presence of other components in the sample. The selectivity of the method was evaluated by analyzing a blank sample compared to a blank sample spiked at the lower limit of quantification (LOQ; ABA equal to 2.00 ppb). As can be seen in Figure 3, the selectivity of this method was good. ## 4. Discussion In the present work, a method for the determination of ABA in biological samples by liquid–liquid extraction (LLE), followed by liquid chromatography–mass spectrometry (LC-MS) of the extract, was developed and validated. The above-reported method has significant advantages, as it does not require expensive operations, in terms of procedures and amounts of solvents used, and yields results with a good level of accuracy and reproducibility and good LOD and LOQ values. These results are better in terms of sensitivity than those achieved by reverse-phase HPLC-DAD analysis on food and beverage matrices [33]. Moreover, to the best of our knowledge, the scientific literature reports methods for the determination of ABA by LC/MS, but in matrices other than serum, such as *Arabidopsis thaliana* [31], rose leaf samples [32], and fresh *Oryza sativa* tissues [33]. The scientific works analyzing ABA in the serum matrix are not focused on method validation and, therefore, do not report validation parameters for comparison [34,35]. Nevertheless, new strategies to detect ABA with high sensitivity, such as fluorescent probes, are under development, although so far applied only to plant tissues [36]. Moreover, the application of the optimized method to serum samples of healthy volunteers who consumed a STM together with a nutraceutical product rich in ABA allowed us to evaluate its applicability in a suitable biological model.
Accordingly, the STM composition of the present work provided $50\%$ of calories from carbohydrates, $20\%$ from protein, and $30\%$ from fat, in agreement with the guidelines for balanced nutrition [27]. In this manner, the glycemic and insulinemic response, together with the increase in plasmatic ABA, was evaluated in a setting as close as possible to real life. The LC-MS analysis performed on the sera obtained from the eight volunteers showed different ABA levels at each time point. As observed in Figure 2, the data obtained confirmed the involvement of this endogenous hormone in the human response to glucose-containing foods. For all subjects, indeed, the serum ABA levels reached the highest concentration 30 min after the consumption of the STM and the nutraceutical product based on TN. In this regard, various studies carried out on human sera have attempted to identify and quantify ABA levels by performing different isolation and detection methods [11,15]. Specifically, plasma ABA levels have been shown to increase in normal glucose tolerant (NGT) subjects following an oral glucose load [14], but not in patients with T2D or in pregnant women with gestational diabetes mellitus (GDM). On the other hand, resolution of GDM one month after childbirth is associated with a restoration of the ABA response to glucose load [15]. Interestingly, a significant increase in ABA was observed in obese patients after biliopancreatic diversion (BPD), a bariatric surgery performed to reduce body weight and improve glucose tolerance, compared to pre-surgery levels [15]. Another observed difference between T2D and NGT individuals was related to fasting ABA values, which were significantly higher in T2D compared to NGT subjects (1.15 vs. 0.66 as median values, respectively). Nevertheless, the distribution of ABA values was found to be normal in NGT but not in T2D patients [15].
These alterations could be due to the heterogeneity of ABA-related dysfunction that occurs in T2D, such as the inability of ABA to increase in response to hyperglycemia or resistance to the activity of ABA. Overall, these observations suggest a role for ABA as a key hormone involved in the management of glucose homeostasis and highlight the importance of monitoring ABA levels in these groups of individuals. Notably, based on reports about the daily consumption of fruits and vegetables containing ABA, epidemiological evidence indicates that the majority of the population consumes very little ABA from dietary sources [37]. Due to the multiple positive health effects attributed to ABA [38], interest in supplementing this bioactive molecule through the administration of nutraceutical products rich in ABA is increasing over time, also in view of the nanomolar blood concentrations of this hormone required for its efficacy. ## 5. Conclusions In conclusion, we herein developed and validated a method for the extraction and LC-MS/MS analysis of ABA in biological samples. Although limited by the small sample size, and therefore requiring confirmation through larger clinical evaluations, the study gains added value from the successful application of this method to real samples, which allowed the evaluation of serum ABA changes after the consumption of the STM and an ABA-rich nutraceutical product. Overall, the results shown could provide a starting point for determining the response to a glucose-containing meal in clinical practice, in terms of ABA concentration. Indeed, serum detection of this endogenous hormone may be considered as a marker to assess the presence of an impaired ABA response in dysglycemic subjects. Undoubtedly, the use of this analysis would be of great interest for clinical trials involving the chronic administration of ABA-rich nutraceutical supplements with hypoglycemic potential.
# Blocking Store-Operated Ca2+ Entry to Protect HL-1 Cardiomyocytes from Epirubicin-Induced Cardiotoxicity ## Abstract Epirubicin (EPI) is one of the most widely used anthracycline chemotherapy drugs, yet its cardiotoxicity severely limits its clinical application. Altered intracellular Ca2+ homeostasis has been shown to contribute to EPI-induced cell death and hypertrophy in the heart. While store-operated Ca2+ entry (SOCE) has recently been linked with cardiac hypertrophy and heart failure, its role in EPI-induced cardiotoxicity remains unknown. Using a publicly available RNA-seq dataset of human iPSC-derived cardiomyocytes, gene analysis showed that cells treated with 2 µM EPI for 48 h had significantly reduced expression of SOCE machinery genes, e.g., Orai1, Orai3, TRPC3, TRPC4, Stim1, and Stim2. Using HL-1, a cardiomyocyte cell line derived from adult mouse atria, and Fura-2, a ratiometric Ca2+ fluorescent dye, this study confirmed that SOCE was indeed significantly reduced in HL-1 cells treated with EPI for 6 h or longer. However, HL-1 cells presented increased SOCE as well as increased reactive oxygen species (ROS) production at 30 min after EPI treatment. EPI-induced apoptosis was evidenced by disruption of F-actin and increased cleavage of caspase-3 protein. The HL-1 cells that survived to 24 h after EPI treatment demonstrated enlarged cell sizes, up-regulated expression of brain natriuretic peptide (a hypertrophy marker), and increased NFAT4 nuclear translocation. Treatment by BTP2, a known SOCE blocker, decreased the initial EPI-enhanced SOCE, rescued HL-1 cells from EPI-induced apoptosis, and reduced NFAT4 nuclear translocation and hypertrophy. This study suggests that EPI may affect SOCE in two phases: the initial enhancement phase and the following cell compensatory reduction phase. Administration of a SOCE blocker at the initial enhancement phase may protect cardiomyocytes from EPI-induced toxicity and hypertrophy. ## 1. 
Introduction Anthracyclines, listed in the 22nd (the latest) version of the World Health Organization (WHO) Model List of Essential Medicines, have been among the most efficacious and widely used chemotherapy drugs since the late 1960s [1]. Epirubicin (EPI) belongs to the anthracycline family; it is often used together with new-generation targeted drugs and plays a major role in the modern era of cancer treatment. EPI kills cancer cells likely via multiple mechanisms, including DNA adduct formation, reactive oxygen species (ROS) production, and lipid peroxidation. While EPI makes a great contribution to the improvement of treatment outcomes, dose-limiting cardiotoxicity hinders its clinical application and often leads to requirements for regimen modification or even discontinuation [2]. Anthracycline-induced cardiotoxicity can be manifested either acutely, during the treatment period, or chronically, from several weeks to even years after treatment has stopped [3]. The associated cardiac dysfunction has a broad range of symptoms including cardiac hypertrophy, cardiomyopathy, and ultimately congestive heart failure [4]. Cardiac hypertrophy, the enlargement of the heart, can be divided into two categories, physiological and pathological. Both develop as an adaptive response to cardiac stress, but their underlying molecular mechanisms, cardiac phenotypes, and prognoses are distinctly different. For example, Ca2+ signaling-related genes are changed only in pathological hypertrophy and not in physiological hypertrophy [5]. Studies have revealed that intracellular Ca2+ regulates the calcineurin–NFAT signaling pathway and thus initiates hypertrophy-related gene transcription [6,7,8,9]. An increase in intracellular Ca2+ leads to the activation of the phosphatase activity of calcineurin, the dephosphorylation of NFAT family members, and their translocation to the nucleus to initiate gene transcription [6].
Store-operated Ca2+ entry (SOCE) is a ubiquitous Ca2+ entry pathway activated in response to the depletion of sarcoplasmic or endoplasmic reticulum (SR/ER) Ca2+ stores. Although SOCE has been well studied in non-excitable cells and skeletal muscles, an understanding of its important role in cardiomyocytes is emerging [10,11]. SOCE machinery components, including stromal interaction molecule 1 (Stim1) as an ER Ca2+ sensor and Orais and transient receptor potential channels (TRPCs) as plasma membrane Ca2+ channels, have been shown to be essential for heart development and to regulate heart remodeling after stress [12]. Accumulating evidence also shows enhanced SOCE in cardiac hypertrophy and heart failure [7,8,9]. While dysregulated Ca2+ signaling has been reported to contribute to EPI-induced cardiotoxicity [13,14,15], whether SOCE plays a role in this process and in the consequent cardiac remodeling remains unknown. Thus, the objective of the present study is to determine the specific role of SOCE in EPI-induced cell apoptosis and hypertrophy in cardiomyocytes. ## 2.1. Chemicals and Reagents Claycomb cell culture medium was purchased from Sigma-Aldrich. FBS (fetal bovine serum), PBS (phosphate-buffered saline), HBSS (Hanks’ balanced salt solution), and penicillin/streptomycin antibiotic were purchased from Invitrogen/Thermo Fisher Scientific (Pittsburgh, PA, USA). Other reagents used include BTP2 and ML204 (Millipore Sigma, St. Louis, MO, USA), EPI (Alfa Aesar, Haverhill, MA, USA), thapsigargin (TG, Adipogen, San Diego, CA, USA), fura-2 AM (Biotium 50033, Fremont, CA, USA), DAPI (Invitrogen D357, Carlsbad, CA, USA), and phalloidin (Enzo BML-T111, New York, NY, USA). ## 2.2. Cell Culture HL-1 cardiomyocytes were maintained in *Claycomb medium* supplemented with $10\%$ FBS, 100 U/mL penicillin, 100 µg/mL streptomycin, 0.1 mM norepinephrine, and 2 mM L-glutamine [16,17]. HL-1 cells were cultured at 37 °C in a humidified $5\%$ CO2 incubator. ## 2.3.
Measurement of Intracellular Ca2+ Concentration Intracellular Ca2+ concentrations in the HL-1 cell line were measured following previously published procedures [17]. In brief, intracellular Ca2+ was measured using a fluorescence microscope with a SuperFluo 40× objective (N.A. 1.3) connected to a dual-wavelength spectrofluorometer (Horiba Photon Technology International, Piscataway, NJ, USA). The excitation wavelengths were set at 350 nm and 385 nm and the emission wavelength was set at 510 nm. Cells were loaded with 5 μM fura-2 acetoxymethyl ester (Biotium, Fremont, CA, USA) for 30 min at 37 °C in the dark. Cellular endoplasmic reticulum (ER) Ca2+ stores were depleted by 10 μM TG in 0.5 mM EGTA dissolved in balanced salt solution (140 mM NaCl, 2.8 mM KCl, 2 mM MgCl2, 10 mM HEPES, pH 7.2). SOCE was observed upon the rapid exchange of the extracellular solution to bath saline containing 2 mM CaCl2 at the indicated times. The intracellular Ca2+ elevation is presented as ΔF350 nm/F385 nm. ## 2.4. Cytotoxicity Assay HL-1 cells were seeded at 1.5 × 105 cells per well in a 29 mm glass-bottom dish. The cells were treated with vehicle, 20 µM BTP2, 1 µM EPI, or 20 µM BTP2 plus 1 µM EPI for 5 h. Then, the culture medium was removed, and the cells were fixed with $4\%$ paraformaldehyde for 10 min at room temperature. The paraformaldehyde was removed, and the cells were then immersed in $0.1\%$ Triton X-100 in PBS for 10 min, washed with PBS twice, and incubated with PBS containing 6.6 µM phalloidin (Enzo, New York, NY, USA) for 15 min. The cells were washed with PBS three times and then counterstained with PBS containing 1 µg/mL DAPI (1:500) for 5 min at room temperature in the dark. The cells were then washed with PBS twice and immersed in ProLong™ Gold antifade reagent (Life Technologies Corporate, Eugene, OR, USA). The fluorescence signals were observed using a DMi8 inverted microscope (Leica, Wetzlar, Germany) with a 40× objective (NA 1.3).
The excitation/emission wavelengths set for DAPI and phalloidin were 405/430 nm and 547/572 nm, respectively. The imaging was performed at room temperature. ## 2.5. Western Blotting Analysis HL-1 cardiomyocytes were lysed in modified RIPA buffer (150 mM NaCl, 50 mM Tris-Cl, 1 mM EGTA, $1\%$ Triton X-100, $0.1\%$ SDS, and $1\%$ sodium deoxycholate, pH 8.0) containing a protease inhibitor cocktail (Sigma-Aldrich, USA) as previously described [18,19]. Protein concentration was quantified using a BCA kit (ThermoFisher, Pittsburgh, PA, USA). Equal amounts of proteins were loaded onto SDS polyacrylamide gels, and the separated proteins were transferred to PVDF membranes (Bio-Rad, Hercules, CA, USA). The blot was incubated with $5\%$ non-fat dry milk blocking buffer (Bio-Rad, Hercules, CA, USA) for 1 h at room temperature and probed with specific primary antibodies in blocking buffer at 4 °C overnight. The primary antibodies used in this study included anti-caspase-3 (1:1000, catalog #9662, Cell Signaling Technology, MA, USA) and anti-GAPDH (1:1000, GeneTex, Irvine, CA, USA). The next day, the blots were washed with PBST three times, followed by incubation with secondary antibodies, including the appropriate horseradish peroxidase (HRP)-conjugated goat anti-rabbit IgG (1:5000, Cell Signaling Technology, MA, USA) and anti-mouse IgG (1:5000, Cell Signaling Technology, MA, USA). Signals were detected using the ECL detection method on a ChemiDoc instrument. ## 2.6. Cell Size Measurement HL-1 cells seeded at 1 × 106 cells per well in a 6-well plate were treated with vehicle, 20 µM BTP2, 1 µM EPI, or 20 µM BTP2 plus 1 µM EPI for 5 h, followed by switching to normal culture media for 24 h. The cells were then observed and phase contrast imaging was conducted using a DMi8 inverted microscope (Leica, Wetzlar, Germany). The cell surface area was quantified using ImageJ and GraphPad 6 software. ## 2.7.
Quantitative Reverse Transcription Polymerase Chain Reaction (qRT-PCR) Total RNA was extracted from HL-1 cells using the Illustra RNAspin Mini RNA Isolation Kit, and the quality and concentration of the RNA were evaluated by photometric measurement of the 260/280 nm absorbance ratio. The primers were obtained from Sigma-Aldrich. Four hundred nanograms of total RNA were used for reverse transcription with the qScript microRNA Synthesis Kit (QuantaBio, Beverly, MA, USA) following the manufacturer’s protocol. cDNA was diluted 1:5 in DNase-, RNase-, and protease-free water and 2 μL of template was used for PCR. Primer pairs for BNP and GAPDH were used. The sequences for the BNP primers are forward (5′–3′) GCCAGTCTCCAGAGCAATTC and reverse (5′–3′) TCTTTTGTGAGGCCTTGGTC. The sequences for the GAPDH primers are forward (5′–3′) AGGTCGGTGTGAACGGATTTG and reverse (5′–3′) TGTAGACCATGTAGTTGAGGTCA. For qRT-PCR, QuantaBio PerfeCTa SYBR Green FastMix ROX was used according to the manufacturer’s procedure. The signals generated by the incorporation of SYBR Green into the amplified DNA were detected on a real-time PCR instrument (StepOne Plus Real-Time PCR System, ThermoFisher Scientific, USA). Data were expressed as 2^(−ΔΔCT) relative to GAPDH gene expression. ## 2.8. Immunofluorescence Staining Cells were seeded into 29 mm glass-bottom dishes. The cells were fixed with $4\%$ paraformaldehyde for 10 min at room temperature. The paraformaldehyde was then removed and the cells were immersed in PBS containing $0.1\%$ Triton X-100 for 10 min. After washing with PBS three times, the cells were blocked in PBS containing $0.1\%$ Triton X-100 supplemented with $10\%$ horse serum for 30 min at room temperature. Then, the cells were incubated with rabbit anti-NFAT4 primary antibody (1:100, ProteinTech, Rosemont, IL, USA) in blocking solution at 4 °C overnight.
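The 2^(−ΔΔCT) relative-expression calculation described in the qRT-PCR section above (target normalized to GAPDH, relative to the vehicle control) can be sketched as follows; the Ct values are illustrative placeholders, not measured values.

```python
# 2^(-ΔΔCT) relative quantification: normalize the target Ct to the
# reference gene (GAPDH), then express relative to the control sample.
# All Ct values below are illustrative placeholders.
def fold_change_ddct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    dct_sample = ct_target - ct_ref        # ΔCT of the treated sample
    dct_control = ct_target_ctrl - ct_ref_ctrl  # ΔCT of the control
    return 2 ** -(dct_sample - dct_control)     # 2^(-ΔΔCT)

fold = fold_change_ddct(24.0, 18.0, 26.3, 18.0)  # hypothetical BNP/GAPDH Cts
```

A lower target Ct in the treated sample (relative to control) yields a fold change above 1, i.e., up-regulation, as reported for BNP after EPI treatment.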
The next day, the cells were washed with PBS three times, then incubated with an Alexa Fluor 488-labelled secondary antibody (1:500, Abcam, Cambridge, MA, USA) at room temperature in the dark for 1 h to visualize the expression and localization of NFAT4. The cells were counter-stained with PBS containing 1 μg/mL DAPI (1:500) for 5 min at room temperature in the dark and then immersed in ProLong™ Gold antifade reagent (Life Technologies Corporate, Eugene, OR, USA). Images were taken using a Nikon A1R HD25 LSM confocal microscope with a 40× oil immersion objective (NA 1.3) using GFP and DAPI filters (Ex: 488/405 nm; Em: 509/430 nm). ## 2.9. RNA-Seq Data Analysis The RNA-Seq dataset GSE217421 was used [20]. Different human induced pluripotent stem cell (iPSC)-derived cardiomyocyte cell lines were treated with 2 µM EPI or DMSO for 48 h, followed by bulk RNA-seq analysis. Differentially expressed genes were identified between drug- and control-treated cell lines. A total of 17 EPI samples and 56 control samples covering five different cell types were used, with each cell type having a different number of replicates, as shown in Table S2. Table S3 shows all the control cell lines and replicate numbers. Two-way ANOVA of the effects of treatment (EPI vs. Con) and cell line (five cell lines) was used for the analysis. ## 2.10. Statistical Analysis Data were analyzed using GraphPad Prism 6 software (Boston, MA, USA) unless indicated otherwise. The results were presented as mean ± standard deviation (SD) or as otherwise indicated. Comparisons between two groups were analyzed using a Student’s t-test. Comparisons among more than two groups were analyzed using one-way analysis of variance (ANOVA) followed by Bonferroni post hoc analysis. A p value of <0.05 was considered statistically significant in all experiments except the RNA-seq data analysis. ## 3.1.
SOCE Machinery Genes Were Downregulated by EPI Treatment in Human iPSC-Derived Cardiomyocytes RNA-seq data analysis of human iPSC-derived cardiomyocytes showed that cells treated with 2 µM EPI for 48 h had significantly reduced expression of SOCE machinery genes, i.e., Orai1, Orai3, TRPC3, TRPC4, Stim1, and Stim2, and increased expression of TRPC2 (Figure 1A, Table S1). The expression of Orai2, TRPC1, TRPC5, and TRPC6 was similar between the EPI and control groups. To confirm the changes in SOCE in live cells, HL-1, a cardiomyocyte cell line derived from adult mouse atria, was used for its ease of culture and well-characterized cardiomyocyte properties. After being treated with 1 µM EPI or vehicle control ($0.1\%$ DMSO) for 6 h, HL-1 cells were loaded with 5 µM of the fluorescent Ca2+ indicator fura-2 AM at 37 °C in the dark for 30 min. The ER Ca2+ stores were depleted by 10 μM TG in BSS containing 0.5 mM EGTA. Upon re-introduction of BSS containing 2 mM CaCl2, the intracellular Ca2+ level (presented as F350 nm/F385 nm) was monitored using live cell imaging, and SOCE was calculated as the difference (ΔF350/F385) between the peak and the baseline before the addition of 2 mM Ca2+. Compared to the vehicle control (black curve), SOCE was significantly reduced in the EPI-treated HL-1 cells (red curve) (Figure 1B,C). ## 3.2. Acute Treatment of EPI Increased SOCE in HL-1 Cardiomyocytes In addition to transcriptional regulation, EPI can increase ROS production and lipid peroxidation. Oxidative stress has been shown to promote STIM1 oligomerization and alter channel activity [21]. We next examined whether acute treatment with EPI and the resulting oxidative stress can affect SOCE in HL-1 cells. Administration of BTP2, a SOCE inhibitor, significantly decreased ΔF350/F385 (0.142 ± 0.064) compared with that of the vehicle-treated control cells (0.188 ± 0.058, $$n = 35$$; ** $$p = 0.0051$$). These data confirmed the presence of BTP2-sensitive SOCE in HL-1 cardiomyocytes (Figure 2A,B,E).
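The SOCE quantification described above, the peak F350/F385 ratio after re-addition of 2 mM Ca2+ minus the baseline ratio before the addition, can be sketched as follows; the ratio trace is an illustrative placeholder, not recorded data.

```python
# ΔF350/F385 = peak ratio after Ca2+ re-addition minus pre-addition baseline.
# The trace below is an illustrative placeholder, not a recorded trace.
def soce_amplitude(ratio_trace, ca_add_index):
    baseline = sum(ratio_trace[:ca_add_index]) / ca_add_index
    peak = max(ratio_trace[ca_add_index:])
    return peak - baseline

trace = [0.82, 0.81, 0.83, 0.82, 1.01, 1.07, 1.02, 0.95]
delta = soce_amplitude(trace, 4)  # Ca2+ re-added at sample index 4
```

Applied per cell, this yields the ΔF350/F385 values that are then compared across treatment groups (vehicle, EPI, BTP2).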
Contrary to prolonged treatment, acute treatment with EPI for only 30 min resulted in significantly enhanced SOCE in HL-1 cells (0.254 ± 0.069, $$n = 37$$) compared to those treated with the vehicle control (0.188 ± 0.058, $$n = 37$$; **** $p \leq 0.0001$). Addition of BTP2 significantly decreased SOCE (0.045 ± 0.027, $$n = 36$$) in EPI-treated HL-1 cells compared to those treated with EPI alone (0.254 ± 0.069, $$n = 36$$; **** $p \leq 0.0001$), indicating that pharmacologically inhibiting SOCE with BTP2 can reduce the EPI-enhanced SOCE in HL-1 cells. Furthermore, ML204, a relatively specific TRPC4 inhibitor, could significantly reduce SOCE in HL-1 cells as well (Supplemental Figure S1). ## 3.3. BTP2 Diminished EPI-Induced ROS Production in HL-1 Cardiomyocytes The reciprocal regulation between mitochondria and intracellular Ca2+ suggests that SOCE may regulate mitochondrial ROS production. Thus, ROS were measured using the DHE dye in HL-1 cells treated with EPI (Figure 3). In HL-1 cells treated with 1 μM EPI for 30 min, ROS levels were significantly increased compared to those in vehicle control cells. Interestingly, BTP2 was able to significantly inhibit ROS production in HL-1 cells even in the presence of EPI (Figure 3). ## 3.4. BTP2 Inhibited EPI-Induced Apoptosis in HL-1 Cardiomyocytes Disruption of F-actin is a hallmark of apoptosis [22]. We next examined the expression of F-actin in HL-1 cells using phalloidin staining. Reduced expression of F-actin was observed in cells treated with 1 µM EPI for 5 h compared to that of vehicle-treated control cells (Figure 4A), suggesting that EPI induced apoptosis in HL-1 cardiomyocytes. When co-treated with 20 µM BTP2, the EPI-induced degradation of F-actin was partially rescued (Figure 4A,B), indicating that BTP2 inhibited EPI-induced F-actin disruption. Anthracyclines have been shown to induce apoptosis in HL-1 cardiomyocytes through caspase-3 [23]. We then examined the levels of cleaved caspase-3 in HL-1 cells.
EPI induced abundant amounts of cleaved caspase-3, as evidenced by Western blot analysis (Figure 4C,D). The EPI-increased level of cleaved caspase-3 was significantly diminished by co-treatment with 20 µM BTP2. Consistent with the data on F-actin degradation, the cleaved caspase-3 analysis again indicated that EPI induced apoptosis in HL-1 cardiomyocytes, which could be alleviated by BTP2.

## 3.5. BTP2 Inhibited EPI-Induced Hypertrophy in HL-1 Cardiomyocytes

EPI-induced cardiac remodeling includes hypertrophy, and SOCE plays a major role in pathophysiological hypertrophy. We thus examined whether BTP2 can inhibit EPI-induced hypertrophy in HL-1 cardiomyocytes. HL-1 cells were treated with vehicle control, 1 µM EPI, or 1 µM EPI plus 20 µM BTP2 for 5 h, followed by drug withdrawal and growth in normal culture medium for 24 h. Phase-contrast images were then taken of these cells, and the surface area of the HL-1 cardiomyocytes was measured and quantified. EPI treatment increased the size of cardiomyocytes to almost twice that of vehicle-treated control cardiomyocytes (Figure 5A,B). In the BTP2 and EPI co-treatment group, the size of HL-1 cells was significantly reduced compared to that in the EPI group. The expression of brain natriuretic peptide (BNP), a specific marker of cardiac hypertrophy [22], was also examined. As shown in Figure 5C, the mRNA level of BNP was significantly increased upon treatment with 1 µM EPI (4.861 ± 0.697, n = 9) compared to that of vehicle-treated cells (control, 1.010 ± 0.155, n = 9). Consistent with the cell size analysis, BTP2 significantly alleviated the EPI-induced BNP expression (3.054 ± 0.260) in HL-1 cells. These data indicate that blocking SOCE with BTP2 can reduce EPI-induced hypertrophy in HL-1 cardiomyocytes.
## 3.6. BTP2 Inhibited EPI-Induced NFAT4 Nuclear Translocation in HL-1 Cardiomyocytes

Nuclear factor of activated T cells (NFAT) has been reported to be a critical nuclear transcription factor regulating cardiac hypertrophy [24]. We lastly examined whether SOCE contributes to EPI-induced hypertrophy through the NFAT pathway in HL-1 cells. Since NFAT4 is the most abundant of the five NFAT subtypes expressed in cardiomyocytes [25], we focused on NFAT4 in this study. After being treated with vehicle, 1 µM EPI, 20 µM BTP2, or 1 µM EPI combined with 20 µM BTP2 for 5 h, HL-1 cells were cultured in growth media for another 24 h until fixation and immunostaining with an anti-NFAT4 antibody. HL-1 cells treated with 10 µM ionomycin for 15 min were used as a positive control for NFAT4 immunostaining, since ionomycin is a strong activator of NFAT signaling [26]. The nuclear translocation of NFAT4 was examined by confocal microscopy imaging. As shown in Figure 6, EPI treatment induced NFAT4 nuclear translocation (indicated by the white arrows), whereas co-treatment with BTP2 resulted in minimal NFAT4 nuclear translocation. These data suggest that the EPI-induced nuclear translocation of NFAT4 was inhibited by BTP2 in HL-1 cells.

## 4. Discussion

EPI is a widely used anthracycline chemotherapy drug, but it also causes cardiotoxicity and results in heart remodeling and even heart failure. This study confirmed that EPI can induce ROS production, cell apoptosis, and hypertrophy in cardiomyocytes. Furthermore, this study showed that acute treatment with EPI can increase SOCE in HL-1 cells, and that blocking SOCE with BTP2 not only reduced the EPI-enhanced SOCE (Figure 2) but also alleviated EPI-induced apoptosis (Figure 4) and hypertrophy (Figure 5). Although SOCE has been associated with hypertrophy in cardiomyocytes and heart failure, this study provides the first evidence, to our knowledge, that SOCE plays a key role in EPI-induced cardiotoxicity and hypertrophy.
More importantly, this study may shed light on developing an approach to alleviate EPI-induced cardiotoxicity by targeting SOCE in the initial phase of EPI treatment (a working model is shown in Figure 7). It is well known that SOCE has a complex nature and co-exists with other Ca2+ influx mechanisms, such as receptor-operated Ca2+ entry (ROCE). The SOCE machinery may contain several molecules forming channel complexes at the plasma membrane that interact with STIMs at the SR/ER. Previous reports showed that Orai1 is expressed in HL-1 cells and that knockdown of Orai1 could abolish SOCE in HL-1 cells [27]. In addition, TRPC1, 3, 6, and 4 may also form SOCE channel complexes in hypertrophic cardiomyocytes [24], and STIM1 can bind and regulate TRPC1, TRPC4, and TRPC5 [28]. The current study showed evidence for a bona fide, BTP2-sensitive SOCE in HL-1 cells. Since BTP2 can block both Orai [29] and TRPC channels [30], our current data cannot exactly pinpoint whether Orais or TRPCs mediate SOCE in hypertrophic cardiomyocytes. RNA-seq analysis showed that treatment with EPI significantly reduced Orai1, Orai3, TRPC3, and TRPC4 expression in human iPSC-derived cardiomyocytes, which is consistent with reduced SOCE (Figure 1A, Supplementary Table S1). Interestingly, ML204, a relatively selective blocker of TRPC4, could significantly reduce SOCE in HL-1 cells (Supplementary Figure S1). These data suggest that these Orais and TRPCs may contribute to SOCE in cardiomyocytes. Future investigation is required to dissect the exact components of the SOCE channel complex in cardiomyocytes that contribute to EPI-induced cardiotoxicity. Cardiomyocytes that survive the initial cardiotoxicity of EPI treatment may undergo cell remodeling, which leads to hypertrophy, cardiac remodeling, and eventual heart failure. SOCE plays a major role in the pathogenesis of heart hypertrophy.
Numerous studies suggest that pathological stimuli activate SOCE and further trigger the NFAT signaling cascade, which is critical for the regulation of growth gene expression and the promotion of cardiomyocyte hypertrophy. Suppression of SOCE machinery genes, such as STIM1 and Orai1, attenuates the hypertrophic responses to pressure overload or agonists [31,32]. Our current findings are in line with these previous reports, indicating that EPI-induced cardiomyocyte hypertrophy could also be inhibited by a SOCE blocker. Interestingly, RNA-seq data analysis of human iPSC-derived cardiomyocytes showed that cells treated with 2 µM EPI for 48 h had significantly reduced expression of SOCE machinery genes, e.g., Orai1, Orai3, TRPC3, TRPC4, Stim1, and Stim2 (Figure 1). Intracellular Ca2+ measurement in live HL-1 cells confirmed that SOCE was indeed reduced in cardiomyocytes treated with EPI for 6 h (Figure 1B,C) or longer. The apparent discrepancy suggests that EPI may affect SOCE in two phases: an initial enhancement phase followed by a compensatory reduction phase. The initial enhancement phase is likely due to immediately increased ROS production and lipid peroxidation right after administration of EPI. The rapid generation of ROS has been best studied in myocardial ischemia–reperfusion models [33]. In addition to the regulatory roles of ROS in many cellular events [34], oxidative stress is also able to promote STIM1 oligomerization, deplete ER Ca2+, and activate SOCE [21]. Since there is a reciprocal regulation between mitochondria and SOCE, EPI-triggered initial mitochondrial ROS production could be further amplified by enhanced SOCE, which is supported by the evidence that blocking SOCE with BTP2 attenuated EPI-triggered ROS production (Figure 3). Additionally, ROS are also known to directly activate TRPC channels [35,36]. During the initial enhancement phase, EPI triggers the apoptotic pathway in cardiomyocytes.
The cardiomyocytes surviving the initial phase may develop compensatory mechanisms at the transcription level. This may explain why prolonged treatment with EPI (at 48 h) resulted in reduced expression of SOCE machinery genes. Chemotherapeutic agents (anthracycline therapy in particular) have been reported to damage the F-actin of cells. In cardiac H9c2 cells, doxorubicin reduces the number of F-actin filaments, especially at higher concentrations [37]. The reorganization of F-actin filaments and characteristic features of apoptosis have also been reported in Chinese hamster ovary cells, pancreatic β cells, breast cancer cells, and other cells upon doxorubicin treatment [38,39,40]. Our previous studies and those of others suggest that SOCE is an effective chemotherapy drug target [18,19,41,42,43]. The findings of the present study show that SOCE contributes to EPI-induced cardiotoxicity, indicating that SOCE blockers may be able to protect cardiomyocytes from the side effects of anthracycline chemotherapy drugs. Together, the results suggest that SOCE blockers may be dual-function drugs for both chemotherapy and cardio-protection.
# CRP/Albumin Ratio and Glasgow Prognostic Score Provide Prognostic Information in Myelofibrosis Independently of MIPSS70—A Retrospective Study

## Abstract

### Simple Summary

To assess prognosis in myelofibrosis (MF), age and the degree of anemia and leukocytosis are taken into account, together with the presence of blasts in the peripheral blood and constitutional symptoms (fever, night sweats, weight loss). The latter are signs of systemic inflammation, which plays a pivotal role in MF pathophysiology. Considering information about genetic changes can refine prognostication. The goal of our retrospective study was to assess the prognostic impact of two laboratory markers of inflammation that are readily available in clinical routine at low cost: C-reactive protein (CRP) and albumin. We found a significant prognostic impact of both parameters, either alone or combined within the CRP/albumin ratio or the Glasgow Prognostic Score, which was independent of the Mutation-Enhanced International Prognostic Scoring System (MIPSS)-70. Therefore, assessing CRP and albumin helps to identify a vulnerable population of MF patients that eludes current prognostic models, even if the presence of high-risk mutations is considered.

### Abstract

In myelofibrosis, the C-reactive protein (CRP)/albumin ratio (CAR) and the Glasgow Prognostic Score (GPS) add prognostic information independently of the Dynamic International Prognostic Scoring System (DIPSS). Their prognostic impact, if molecular aberrations are considered, is currently unknown. We performed a retrospective chart review of 108 MF patients (prefibrotic MF n = 30; primary MF n = 56; secondary MF n = 22; median follow-up 42 months). In MF, both a CAR > 0.374 and a GPS > 0 were associated with a shorter median overall survival (21 [$95\%$ CI 0–62] vs. 80 months [$95\%$ CI 57–103], p < 0.001 and 32 [$95\%$ CI 1–63] vs. 89 months [$95\%$ CI 65–113], p < 0.001).
Both parameters retained their prognostic value after inclusion into a bivariate Cox regression model together with the dichotomized Mutation-Enhanced International Prognostic Scoring System (MIPSS)-70: CAR > 0.374 HR 3.53 [$95\%$ CI 1.36–9.17], p = 0.0095 and GPS > 0 HR 4.63 [$95\%$ CI 1.76–12.1], p = 0.0019. An analysis of serum samples from an independent cohort revealed a correlation of CRP with levels of interleukin-1β and of albumin with TNF-α, and demonstrated that CRP, but not albumin, was correlated with the variant allele frequency of the driver mutation. Albumin and CRP, as parameters readily available in clinical routine at low cost, deserve further evaluation as prognostic markers in MF, ideally by analyzing data from prospective and multi-institutional registries. Since albumin and CRP levels reflect different aspects of MF-associated inflammation and metabolic changes, our study further highlights that combining both parameters seems potentially useful to improve prognostication in MF.

## 1. Introduction

Both primary and secondary myelofibrosis (PMF/SMF) are caused by a complex interplay of (epi)genetic alterations in hematopoietic stem cells and inflammatory changes, which affect hematopoiesis and impact patient survival [1,2]. The Dynamic International Prognostic Scoring System (DIPSS), as a standard tool for prognostication, considers age, anemia, leukocytosis, peripheral blast counts and constitutional symptoms [3]. It can be refined by incorporating information about cytogenetic aberrations, the mutational profile or both [4]. In addition to these complex and expensive parameters, some routine laboratory markers add prognostic information, such as C-reactive protein (CRP) and albumin.
Elevated CRP levels have been associated with several adverse disease features and a shorter leukemia-free survival [5,6], and albumin has been consistently shown to add prognostic information independently of DIPSS and several DIPSS-based prognostic scoring systems [7,8,9]. Furthermore, indices combining CRP and albumin, such as the CRP/albumin ratio (CAR) [10] or the Glasgow Prognostic Score (GPS) [11], provide DIPSS-independent prognostic information in MF. With regard to both the CAR and the GPS, it remains elusive whether they still add prognostic value if the molecular risk profile is considered. We therefore examined the prognostic impact of the CAR and the GPS in relation to the Mutation-Enhanced International Prognostic Scoring System (MIPSS)70, which includes the mutational profile without requiring conventional metaphase cytogenetics [12].

## 2. Patient Population and Methods

We performed a retrospective chart review of patients diagnosed with MF at the Cantonal Hospital St. Gallen between 2000 and 2020 (Cohort A). One hundred and eight patients were identified (47 female and 61 male, median age 72; pre-fibrotic MF: 30/108 ($28\%$), PMF 56/108 ($52\%$) and SMF 22/108 ($20\%$)), and clinical and laboratory data were collected at the time of diagnosis and before commencement of treatment. All of the cases were reviewed individually to ensure correct classification according to WHO 2016 [13]. If the diagnostic work-up did not include next-generation sequencing (NGS), we performed mutational profiling using material from the diagnostic samples (see the Supplementary Materials “Supplementary Methods”). Detailed patient characteristics of the cases with MF in cohort A (PMF and SMF) are shown in Table 1. The CAR was calculated by dividing the CRP concentration (mg/L) by the albumin concentration (g/L).
The GPS was determined according to [14] (GPS 0: albumin ≥ 35 g/L and CRP ≤ 10 mg/L; GPS 1: either albumin < 35 g/L or CRP > 10 mg/L; GPS 2: both albumin < 35 g/L and CRP > 10 mg/L). For CRP, we used the upper limit of normal from our local laboratory for dichotomization (≤/>8 mg/L), and for albumin, the median of our population was used (</≥40 g/L). For the CAR, we used a cut-off of </≥0.204, as proposed by [10], and a CAR of </≥0.374, representing the fourth quartile of our cohort. The methods applied for the statistical analysis are described in detail in the Supplementary Materials. Plasma samples from an independent Canadian cohort (Cohort B) of 64 MPN patients (MF n = 28, PV n = 18, ET n = 18; Supplementary Table S1) and healthy controls (n = 16) were available to assess the correlation of high-sensitivity (hs)CRP and albumin levels with pro-inflammatory cytokines, which were measured as described in detail in the Supplementary Materials.

## 3.1. Levels of CRP and Albumin, the CAR in Different MF Subgroups and Their Association with Disease Characteristics

Within Cohort A, we found higher levels of conventional CRP in patients with MF (PMF: n = 56, median 5 mg/L [IQR 2–18]; SMF: n = 22, median 5 mg/L [IQR 3–9]) compared to pre-fibrotic MF (n = 30, median 1 mg/L [IQR 1–8], p = 0.034). With regard to the albumin concentration, we found no difference (PMF median 40.5 g/L [IQR 37–42.6], SMF median 39 g/L [IQR 36.4–42.7], pre-fibrotic MF median 42 g/L [IQR 38–43.6], p = 0.253). In MF, a CRP elevation > 8 mg/L was associated with lower levels of hemoglobin and platelets, a higher percentage of peripheral blasts, higher LDH levels, transfusion dependency and the presence of constitutional symptoms, whereas albumin levels < 40 g/L were associated only with the degree of anemia and with a lower body mass index (BMI), as shown in Table 1.
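The CAR and GPS definitions given in the methods are simple enough to express directly. The sketch below follows the cut-offs stated in the text (GPS: albumin 35 g/L, CRP 10 mg/L); the example values are invented and do not come from the study cohort.

```python
# Hedged sketch of the CAR and GPS definitions stated in the methods.
# Cut-offs follow the text (GPS: albumin 35 g/L, CRP 10 mg/L); the example
# values are invented and do not come from the study cohort.

def car(crp_mg_l: float, albumin_g_l: float) -> float:
    """CRP/albumin ratio: CRP (mg/L) divided by albumin (g/L)."""
    return crp_mg_l / albumin_g_l

def gps(crp_mg_l: float, albumin_g_l: float) -> int:
    """Glasgow Prognostic Score:
    0 = albumin >= 35 g/L and CRP <= 10 mg/L
    1 = either albumin < 35 g/L or CRP > 10 mg/L
    2 = both albumin < 35 g/L and CRP > 10 mg/L
    """
    return int(albumin_g_l < 35) + int(crp_mg_l > 10)

# Invented example: CRP 14 mg/L, albumin 40 g/L
print(round(car(14, 40), 3))  # 0.35
print(gps(14, 40))            # 1 (CRP elevated, albumin normal)
print(gps(5, 41))             # 0
```

Note that the two-criteria sum reproduces the GPS table exactly, because a score of 2 requires both the albumin and the CRP criterion to be met.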
An additional comparison of disease characteristics following the cut-offs used within the GPS (CRP ≤/> 10 mg/L and albumin </≥ 35 g/L) is provided in Supplementary Table S2. There was no difference in CRP, albumin and the CAR between JAK2-mutated and CALR-mutated cases. With regard to the JAK2-V617F variant allele frequency (VAF), we observed a significantly higher CAR in patients with a VAF > $50\%$ (median 0.243 vs. 0.095, p = 0.035) and a trend towards higher CRP values (median 7.5 vs. 4.5 mg/L, p = 0.071). No difference was noted for albumin (median 39 vs. 38 g/L, p = 0.158). Patients with high-risk mutations according to MIPSS70 showed a tendency towards a higher CAR (median 0.579 vs. 0.115, p = 0.051) but did not differ significantly with regard to the single parameters. Further details are shown in Supplementary Table S3. The MIPSS70 was available for 59/78 patients ($76\%$): intermediate risk 43/59 ($72.9\%$), high risk 14/59 ($23.7\%$) and low risk 2/59 ($3.4\%$). Overall survival (OS) differed significantly among these groups (Supplementary Figure S1). Compared to the MIPSS70-intermediate patients, the MIPSS70-high-risk patients had significantly higher CRP levels (median 14 mg/L [IQR 5–30] vs. 5 mg/L [IQR 1–10], p = 0.012), but not lower albumin levels (median 38 vs. 39 g/L, p = 0.224). Accordingly, the CAR was higher in MIPSS70-high-risk patients (median 0.504 [$95\%$ CI 0.95–0.739] vs. 0.116 [$95\%$ CI 0.026–0.255], p = 0.025). Given their low number, we did not include the MIPSS70-low-risk group in this analysis.
## 3.2. Prognostic Impact of CRP, Albumin and Derived Indices (CAR and GPS) in MF

The probability of death rose continuously with lower albumin levels, even within the range defined as normal (OR = 0.85, $95\%$ CI 0.73–0.99; p = 0.043, Supplementary Figure S2), and an albumin concentration below the population median was associated with a significantly shorter survival (albumin </≥ 40 g/L, median OS 50 [$95\%$ CI 38–62] vs. 101 [$95\%$ CI 51–151] months, p = 0.026). CRP > 8 mg/L (n = 24) was associated with shorter survival compared to CRP within the normal limits (≤8 mg/L, n = 47): median OS 44 [$95\%$ CI 0–89] vs. 89 [$95\%$ CI 56–122] months, p < 0.001. Correspondingly, a higher CAR was associated with inferior survival (median OS CAR ≤/> 0.204: 89 [$95\%$ CI 67–111] vs. 44 [$95\%$ CI 3–85] months, p = 0.001; and CAR ≤/> 0.374: 80 [$95\%$ CI 57–103] vs. 21 [$95\%$ CI 0–62] months, p < 0.001). Similar results were obtained for patients with a GPS of 1 or 2 (n = 18) compared to patients with a GPS of 0 (n = 39): median OS 32 [$95\%$ CI 1–63] vs. 89 [$95\%$ CI 65–113] months, p < 0.001. Kaplan–Meier curves for the patients for whom both CRP and albumin were available (n = 57) are shown in Figure 1A–D. For all of these factors, a higher HR for mortality was observed in univariate Cox regression models (Table 2). Given the low number of MIPSS70-low-risk patients (n = 2), we dichotomized the cohort into a “MIPSS70dich low/intermediate” risk group (n = 45) and a “MIPSS70dich high” risk group (n = 14) for analyses in bivariate models. Here, CRP > 8 mg/L, albumin < 40 g/L, and both a CAR > 0.374 and a GPS > 0 retained their prognostic value together with MIPSS70dich, whereas a CAR > 0.204 did not (Table 2).
In a separate analysis considering only the PMF patients (n = 35) and applying the same thresholds for the CAR (>0.374) and GPS (>0), the results remained significant, albeit with large $95\%$ confidence intervals (Table 3). Of note, for SMF, the very low number of cases (n = 12) for whom both CRP and albumin were available precluded a separate analysis. For MIPSS70-intermediate patients with both CRP and albumin available (n = 35), OS was significantly shorter for albumin < 40 g/L, CAR > 0.374 and GPS > 0, whereas CRP ≤/> 8 mg/L was not associated with an adverse prognosis (Figure 2A–D).

## 3.3. Association of Levels of CRP and Albumin with Inflammatory Cytokines

Analysis of cohort B showed higher levels of hsCRP (median 10.07 vs. 7.02 mg/L; p = 0.0004) and lower levels of albumin (median 31.4 vs. 25.87 g/L; p = 0.0012) in MF versus MPN without fibrosis and/or the healthy controls. The VAF of the driver mutation was correlated only with levels of hsCRP (p = 0.008) (Supplementary Figures S3 and S4). The levels of interleukin-1β, interferon-γ, CCL17, I-TAC and ENA-78/CXCL-5 correlated positively with hsCRP, while no significant correlation was observed for IL-6, TNFα, IFNα, IL-8, IL-18, IL-10, IL-33, IL-17a, IL-23 and MCP-1 (Supplementary Figures S5 and S6). Albumin levels were inversely correlated with TNFα and MCP-1 (Supplementary Figures S7 and S8).

## 4. Discussion

CRP and albumin represent surrogate markers for the extent of inflammation, a key element of MPN pathophysiology [1,15]. Higher CRP levels are known to be associated with shortened leukemia-free and overall survival in univariate analyses [5,6], whereas for albumin, a prognostic value independent of several DIPSS-based scoring systems has been described previously [7,8,9]. As expected, we therefore found a significant impact of both parameters on survival in our cohort.
Levels of CRP were more closely related to established adverse features of MF, which are partly or indirectly taken into account by current models, e.g., peripheral blasts, more severe anemia and/or transfusion dependency, or thrombocytopenia < 100 × 109/L, whereas only lower albumin levels were associated with a lower BMI as a measure of MF-induced cachexia. In addition, both factors were associated with levels of different cytokines, namely CRP with interleukin-1β, a driver of MF pathogenesis [16,17], and albumin with TNF-α, a key mediator of cachexia [18]. This implies that CRP and albumin probably reflect different aspects of MF pathophysiology. It is therefore of interest to combine them in the CAR or the GPS. For both parameters, a DIPSS-independent prognostic value has already been described in MF [10,11]. A recent report on acute myeloid leukemia patients not eligible for stem-cell transplantation illustrates that a combined assessment of CRP and albumin is of interest in myeloid malignancies in general [19]. We found a MIPSS70-independent prognostic value for both a CAR > 0.374 and a GPS > 0. Hence, both parameters add prognostic information, even in the context of a molecular prognostic score. However, the relevant cut-off for the CAR used within our MIPSS70-based model was higher than that published for DIPSS-based prognostication [10]. This might be due to a different composition of the patient populations, different access to potentially disease-modifying drugs such as ruxolitinib, or the influence of age, which is part of the DIPSS but not the MIPSS70. Further studies are needed to define the optimal cut-off of the CAR in the context of the individual scoring systems and/or to decide whether the CAR or the GPS provides better prognostic information.
Malnutrition and/or activation of catabolic pathways leading to hypoalbuminemia are probably not sufficient to explain the prognostic impact of albumin, since levels still in the lower range of normal represent an adverse risk factor not only in our cohort, but also according to all of the reports currently available on the prognostic role of albumin in MF [7,8,9]. Several pleiotropic effects of albumin have been described [20]. Amongst others, it represents the main anti-oxidant in the extracellular space [21], and higher levels could be associated with an increased capability to counteract ROS-mediated inflammation, which is linked to disease progression in MF [22]. This would indicate a vicious cycle if inflammation has reached a point where albumin synthesis is limited. However, this hypothesis warrants confirmation in further studies. Considering albumin and CRP in clinical practice evidently helps to identify a more vulnerable population of MF patients who elude current prognostic models and could benefit from multimodal interventions. Both markers are associated with cardiovascular risk [23,24]; therefore, modifiable risk factors should be aggressively managed in MF patients with low albumin and elevated CRP levels and/or a higher CAR. The JAK2 inhibitor ruxolitinib not only controls constitutional symptoms and splenomegaly but also lowers CRP levels and increases the albumin concentration [25]. This may justify its use even in low-risk patients harboring one of the risk factors based on CRP and albumin, especially if splenomegaly is already present. Non-pharmacological interventions, such as physical exercise and nutritional interventions, can positively affect both parameters [26,27]. In this context, the Mediterranean Diet is currently under investigation in MF [28]. As this was a monocentric and retrospective study, the interpretation of our observations is subject to several limitations.
Apart from a potential selection bias, the limited number of patients is most relevant, since it precludes defining the cut-off of the CAR best suited for prognostication or adjusting for possible confounding factors such as age and treatment with disease-modifying drugs such as ruxolitinib. Due to the low number of patients, we had to combine cases of primary and secondary MF. Whether prognostic scores established for PMF are of value for patients with SMF is still a matter of debate [29,30], and the Myelofibrosis Secondary to PV and ET-Prognostic Model (MYSEC-PM) was developed especially for this population [31]. However, the MYSEC-PM does not consider the presence and type of additional non-driver mutations; hence, the MIPSS70 represents one of the currently suggested tools for prognostication in both PMF and SMF, if the mutational profile has to be considered [32]. A further limitation is the fact that conventional metaphase cytogenetics were available only for a minority of patients, precluding the assessment of the factors studied in the context of scoring systems that consider chromosomal aberrations in addition to the mutational profile, such as the MIPSS70+ Version 2.0 [33].

## 5. Conclusions

Our data have shown for the first time that the CAR and the GPS add prognostic information independently of the MIPSS70-based molecular risk profile in MF. Albumin and CRP are easily available in clinical routine at low cost and represent potential biomarkers to faithfully identify a more vulnerable population of MF patients not identified by current prognostic model systems. Moreover, since CRP and albumin probably reflect different aspects of MF pathophysiology, including inflammation and metabolic aspects, combining both parameters seems particularly useful for MF prognostication.
However, further studies involving multi-center registries with larger cohorts are necessary to validate the prognostic impact of albumin and CRP within the context of prognostic scoring systems considering both cytogenetics and the mutational status. In addition, it remains to be determined whether improvement in CRP and albumin levels during therapy is associated with a better prognosis. Despite all limitations, our observations fit well with the emerging data and support the prognostic role of albumin and CRP in MF.
# HLA-B*57:01/Carbamazepine-10,11-Epoxide Association Triggers Upregulation of the NFκB and JAK/STAT Pathways

## Abstract

Drug-mediated immune reactions that depend on the patient’s genotype determine individual medication protocols. Despite extensive clinical trials prior to the licensing of a specific drug, certain patient-specific immune reactions cannot be reliably predicted. This makes obvious the need to assess the actual proteomic state of selected individuals under drug administration. The well-established association between certain HLA molecules and drugs or their metabolites has been analyzed in recent years, yet the polymorphic nature of HLA makes a broad prediction unfeasible. Depending on the patient’s genotype, carbamazepine (CBZ) hypersensitivities can cause diverse disease symptoms such as maculopapular exanthema, drug reaction with eosinophilia and systemic symptoms, or the more severe diseases Stevens-Johnson syndrome and toxic epidermal necrolysis. An association with CBZ administration has been demonstrated not only for HLA-B*15:02 and HLA-A*31:01 but also for HLA-B*57:01. This study aimed to illuminate the mechanism of HLA-B*57:01-mediated CBZ hypersensitivity by full proteome analysis. The main CBZ metabolite EPX introduced drastic proteomic alterations, such as the induction of inflammatory processes through the upstream kinase ERBB2 and the upregulation of the NFκB and JAK/STAT pathways, implying a pro-apoptotic, pro-necrotic shift in the cellular response. Anti-inflammatory pathways and associated effector proteins were downregulated. This disequilibrium of pro- and anti-inflammatory processes clearly explains fatal immune reactions following CBZ administration.

## 1. Introduction

The approval of a medical product requires extensive and distinct clinical trials. Yet, the preselected group of volunteers who attend those clinical trials is limited.
Every single person has a unique genetic profile affecting the functionality of any cell type of the immune system. It becomes obvious that drug-hypersensitivity reactions in most cases disorganize the adaptive immune system, resulting in severe cellular autoimmune reactions. In the past, these reactions resulted in the mandatory determination of distinct genetic profiles and, at worst, in the exclusion of patients from the desired medication. It is clear that these scenarios of hypersensitivity reactions following drug treatment represent an unpredictable challenge for the health care system. Adverse events occur when harmful symptoms arise after administration of a certain drug. If the harm is caused by application of the respective drug, the immunological reaction is termed an adverse drug event; if the drug was applied correctly at normal dosage, the reaction is termed an adverse drug reaction (ADR) [1,2,3,4,5,6]. ADRs usually occur in a dose-dependent and predictable manner and can be explained by the pharmacological toxicity of the drug [1,2,7]. Nevertheless, $20\%$ of all ADRs occur in a seemingly idiosyncratic manner; these reactions are termed type B ADRs [1,2,8]. Type B ADRs are often related to the immune system [1]. Since 2002, more and more type B ADRs have been described to be associated with the highly polymorphic human leukocyte antigen (HLA) molecules [9,10,11,12]. HLA molecules are cell surface proteins with a central function in immune surveillance. They present peptides to immune receptors of T and NK cells and, based on the origin of the presented peptide (i.e., self-peptide or pathogen-derived peptides), effector cell responses are prevented or induced [13,14]. The presentation of a diversity of peptides derived from different origins is unique in ligand/receptor biology, since every peptide bound to an HLA molecule results in a structural and biophysical alteration of the peptide-HLA complex.
Therefore, it becomes clear that every subtle variation in the HLA molecule might facilitate binding and presentation of peptides that have not undergone selection by the thymus; the biological consequences are autoimmune-like reactions [15,16]. The anticonvulsant carbamazepine (CBZ) is widely used to treat various neurological diseases such as epilepsy, bipolar disorders or schizophrenia. However, CBZ administration can cause cutaneous type B ADRs in certain patients. These reactions have been described to be associated with the human leukocyte antigen (HLA) class I genotypes HLA-B*15:02 and HLA-A*31:01 [11,17]. Depending on the patient’s genotype, CBZ-induced ADRs are characterized by differential disease phenotypes. The symptoms range from mild skin rash such as maculopapular exanthema (MPE) and drug reaction with eosinophilia and systemic symptoms (DRESS) to more severe and potentially fatal Stevens-Johnson syndrome (SJS) and toxic epidermal necrolysis (TEN) [18,19]. It has been shown that the more severe SJS and TEN occur mainly in HLA-B*15:02+ patients, whereas MPE and DRESS following CBZ-treatment more likely arise in HLA-A*31:01+ patients [11,20]. Positive and negative predictive values indicate that the clinical picture of HLA-associated ADRs cannot be explained exclusively by the presence of a certain HLA allele [21], hence other factors have to be taken into account [22]. We could demonstrate that CBZ treatment in soluble HLA-A*31:01-expressing cells and EPX treatment in soluble HLA-B*15:02-expressing cells result in different alterations in the cellular proteome that might contribute to the explanation of distinct clinical pictures of the diseases [23]. Recently, a further association of CBZ-induced ADRs has been described. 
The allele HLA-B*57:01 is unambiguously associated with SJS/TEN following treatment with CBZ in Europeans [24]: the analysis included 28 European patients with CBZ-induced SJS, SJS/TEN overlap or TEN, 11 of whom were carrying HLA-B*57:01 ($39.29\%$), whereas the frequency of this allele was $6.69\%$ in European general population controls. The onset of SJS/TEN following drug application should be closely monitored; the algorithm of drug causality for epidermal necrolysis (ALDEN) has been adapted to provide a reliable diagnosis [25]. The allele HLA-B*57:01 is originally known to be strongly associated with hypersensitivity to the antiretroviral drug abacavir (ABC) [26,27]. ABC-induced ADRs in HLA-B*57:01+ patients range from fever, fatigue and gastrointestinal symptoms to severe multiorgan failure. For this disease pattern, autologous cytotoxic T cells that attack the body itself in an autoimmune-like manner could be verified to be responsible [28]. Illing et al. [29] could impressively show that ABC occupies the peptide binding region (PBR) of HLA-B*57:01, resulting in a conformational change of the HLA molecule and, therefore, in CD8+-mediated recognition of the self-HLA-B*57:01 molecules bound to a foreign peptide. Since then, this finding has provided the gold standard for understanding HLA-mediated ADRs [29]. Patients with susceptible HLA variants have not been permitted to take certain drugs. However, more and more clinical studies have recently demonstrated that drug-tolerant patients exist [21]; namely, patients with a certain HLA type who could still receive the drug in question without immunological reactions occurring. This seems difficult to believe, since drug binding and, subsequently, loading of a different peptide repertoire into the peptide binding region of the respective HLA molecule should still occur. However, in some cases a slight alteration in the amino acid sequence of bound peptides is not sufficient to trigger T cell responses. 
This would lead to maintained T cell tolerance in certain patients [30,31]. These drug-tolerant patients could receive the respective drug regardless of their HLA type. This phenomenon can best be appreciated through a real-time view of the proteomic content of cells with the susceptible HLA type and the respective drug. Modern proteomics provides deep insight into the health status and the biological and functional capacities of a cell, and would therefore provide a platform for pharmacovigilance monitoring. The observation of an association between HLA-B*57:01 and CBZ-mediated ADRs is remarkable in this respect, since it further emphasizes that CBZ hypersensitivity seems to be associated with several HLA alleles that differ structurally. CBZ hypersensitivity was formerly understood as an immunological reaction affecting patients with HLA-A*31:01 or HLA-B*15:02 following drug administration. We were recently able to demonstrate why CBZ hypersensitivity features completely different clinical pictures depending on the HLA type: while HLA-A*31:01 preferentially binds CBZ, HLA-B*15:02 preferentially binds EPX [32]. Both small-molecule (drug)/protein (HLA) complexes alter the HLA-specific peptidome through occupation of the PBR, yet the T cell response, manifested in the HLA-specific clinical picture, differs significantly. Clarification of the relation between HLA molecule and drug could in this case be delivered by complete proteome analysis [23]. The aim of this work is to give a first insight into the complex molecular basics of HLA-B*57:01-associated CBZ-mediated ADRs. This knowledge will contribute to a comprehensive understanding of the mechanisms of CBZ hypersensitivities that seem to represent disparate diseases. To characterize these genetically determined CBZ immune effects, we performed full proteome analysis of HLA-B*57:01-expressing cells in response to CBZ or EPX treatment. 
Understanding the pharmacological and biological basis of distinct genetic profiles and drug interplay will guide towards personalized and safe medication. ## 2.1. Detection of CBZ and EPX Bound to sHLA-B*57:01 Molecules The human B-lymphoblastoid cell line LCL721.221 (LGG promochem, Wesel, Germany) was transduced with a lentiviral construct encoding HLA-B*57:01 Exon 1–4, as previously described [33]. LCL721.221 cells expressing sHLA-B*57:01 molecules were cultured in RPMI 1640 (Lonza, Basel, Switzerland) supplemented with $10\%$ fetal calf serum (FCS, Lonza), 2 mM L-glutamine (c. c. pro, Oberdorla, Germany), 100 U/mL penicillin, and 100 µg/mL streptomycin (c. c. pro) at 37 °C and $5\%$ CO2 in the presence of 25 µg/mL CBZ or EPX, and cell culture supernatants were collected twice a week. Affinity purification of sHLA-B*57:01 molecules after drug treatment was performed, and the protein concentration was determined with the Bicinchoninic Acid Assay (BCA) Protein Quantitation Kit (Interchim, San Diego, CA, USA). 150 ng of purified drug-treated sHLA-B*57:01 molecules were applied to mass spectrometric drug quantification in solution as previously described [32]. ## 2.2. Detection of CBZ- or EPX-Induced Modifications of the LCL721.221/HLA-B*57:01 Proteome Proteome analysis was performed with 1 × 10^6 untreated, CBZ- or EPX-treated LCL721.221 and LCL721.221/sHLA-B*57:01 cells. Parental and sHLA-B*57:01-expressing LCL721.221 cells are not able to metabolize CBZ to EPX; this enables the orthogonal analysis of CBZ and EPX treatment. Cells were cultured with the addition of 25 µg/mL CBZ or EPX for 48 h. After 24 h, drug addition was repeated. Cell harvest in RIPA lysis buffer was performed as previously described [34], and the protein concentration was determined with the Bicinchoninic Acid Assay (BCA) Protein Quantitation Kit (Interchim, San Diego, CA, USA). Sample preparation, protein digestion and MS analysis were performed as previously described [23,35]. ## 2.3. 
Data Analysis The MaxQuant software (version 1.6.3.3; [36]) was used to search the obtained spectra against the Swiss-Prot reviewed UniProtKB database (version 01/2021, 20,395 entries; [37]). Propionamide of cysteine was set as fixed modification, and oxidation of methionine, N-terminal acetylation, and deamidation of glutamine and asparagine were set as variable modifications. The data were processed with the Perseus software (version 1.6.2.3; [38]). In brief, proteins flagged as potential contaminants, only identified by site, or matching the reversed decoy database were excluded from further analysis, as were proteins that were not measured in all replicates. To exclude potential effects on protein abundance caused by transduction with sHLA-B*57:01, the proteomes of untreated LCL721.221/sHLA-B*57:01 and parental LCL721.221 cells were subtracted from those of the corresponding CBZ- or EPX-treated cells. Visualization was performed with R [39]. In particular, the R packages ComplexHeatmap [40] and ggplot2 [41] were used. The heat map was generated by including the proteins that tested positive in a Benjamini-Hochberg FDR-based ANOVA. The Ingenuity Pathway Analysis software was used to perform an upstream analysis of significantly regulated proteins (IPA, QIAGEN Inc., https://www.qiagenbio-informatics.com/products/ingenuity-pathway-analysis (accessed on 24 November 2022)). Gene ontology analysis was performed with the GSEA software [42,43]. The mass spectrometry proteomics data were deposited in the ProteomeXchange Consortium via the PRIDE [44] partner repository with the dataset identifier PXD037502. ## 3.1. CBZ and EPX Bind to sHLA-B*57:01 To verify binding of CBZ or EPX to sHLA-B*57:01 molecules, sHLA-B*57:01-expressing cells were cultured in the presence of 25 µg/mL CBZ or EPX, and sHLA-B*57:01-containing cell culture supernatant was collected twice a week. 
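The filtering and background-subtraction logic described above can be sketched in a few lines. This is an illustrative reimplementation, not the Perseus pipeline itself; the field names (`contaminant`, `reverse`, `only_by_site`, `lfq`) are hypothetical stand-ins for the corresponding MaxQuant output columns.

```python
# Hypothetical sketch of the Perseus-style filtering described above.
# A protein group is kept only if it is not a contaminant, not a reversed
# decoy, not "only identified by site", and was quantified (non-None LFQ
# value) in all replicates.

def filter_protein_groups(groups, n_replicates):
    kept = []
    for g in groups:
        if g.get("contaminant") or g.get("reverse") or g.get("only_by_site"):
            continue  # remove decoys and artifacts
        lfq = g["lfq"]  # list of LFQ intensities, one per replicate
        if len(lfq) == n_replicates and all(v is not None for v in lfq):
            kept.append(g)
    return kept

def subtract_control(treated_lfq, control_lfq):
    """Subtract the mean untreated (control) abundance from each treated
    replicate, removing transduction-related background as in the text."""
    control_mean = sum(control_lfq) / len(control_lfq)
    return [v - control_mean for v in treated_lfq]
```

With three hypothetical protein groups (one clean, one contaminant, one with a missing replicate), only the clean group survives the filter.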
Functional sHLA-B*57:01 molecules were affinity purified on an NHS column coupled to the mAb W6/32, and the protein concentration was determined as previously described [45]. 150 ng of CBZ- or EPX-treated sHLA-B*57:01 molecules were applied to UPLC-MS/MS analysis for detection of CBZ or EPX in solution [32]. Both CBZ and EPX could be verified to bind to sHLA-B*57:01 molecules: in the solution containing 150 ng of CBZ-treated sHLA-B*57:01 molecules, 0.033 ng/mL CBZ was detected, and in the EPX-containing sHLA-B*57:01 solution, 0.020 ng/mL EPX was detected (Figure 1). ## 3.2. Quantitative Proteomic Analysis after CBZ and EPX Treatment The cellular proteomes of parental LCL721.221 cells and LCL721.221/sHLA-B*57:01 cells were analyzed in an LFQ-based approach. For comparison of CBZ or EPX treatment of HLA-B*57:01-expressing and parental LCL721.221 cells, the proteomic content of untreated LCL721.221 and LCL721.221/sHLA-B*57:01 cells was subtracted from the drug-treated proteome abundances. In total, 4519 proteins could be identified. Only proteins that were measured in all replicates without imputation were included in the analysis. After filtering, 2713 proteins remained for further analysis. By subtracting the untreated LCL721.221/sHLA-B*57:01 and parental LCL721.221 cells from the corresponding CBZ- or EPX-treated cells, possible effects on the proteome introduced by the transduction were excluded. The data were assessed by dimensionality reduction with t-SNE, and clustering by treatment confirmed that the data were suitable for further analysis. Distinct clustering also occurred in the heat map analysis (Figure 2). ## 3.3. 
EPX Treatment Induced a Strong Reaction in the Proteome of LCL721.221/sHLA-B*57:01 Cells CBZ treatment induced a significant change in abundance ($p \leq 0.05$) for 335 proteins, of which only 35 changed more than 2-fold, in LCL721.221/sHLA-B*57:01 cells compared to parental cells (Figure 3A). EPX treatment, however, led to 776 significantly changed proteins, 134 of them with a difference greater than 2-fold (Figure 3B). Furthermore, we found ten proteins showing overlapping regulation between CBZ and EPX treatment, with one protein co-upregulated and nine proteins co-downregulated (Figure 3A,B and Figure S2). An upstream analysis with the IPA software was performed to find central regulators responsible for the changes in abundance. For CBZ treatment, the serine/threonine kinase IKBKE was suggested as the only activated upstream kinase (p-value 0.0457; Z-score 2.0). Treatment with EPX, in contrast, led to the regulation of 14 kinases, with the receptor tyrosine kinase ERBB2 as the most activated and the insulin receptor INSR predicted to be the most inactivated kinase. According to IPA upstream analysis, ERBB2 is responsible for the upregulation of IKBKB, MCM5, POLD2 and MCM7. In contrast, INSR leads to the downregulation of SLC39A7, MTCH2, ECI1, TOMM40 and LSS. Other upstream regulators indicated as activated were part of the MAPK protein family or involved in the MAPK signaling cascade. In comparison, upstream regulators predicted to be downregulated were G protein alpha, Rb, PRKAA, ERN1 and CDKN1A (Figure 3C). Further analysis of the function classes of significantly regulated proteins via the IPA software showed that EPX treatment induced the expression of proteins involved in “DNA Replication, Recombination, and Repair”. Furthermore, cell cycle-related proteins were found to be upregulated (Figure 4). Downregulated proteins were involved in “organismal death” and “glycogenesis”. 
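The two thresholds applied above ($p \leq 0.05$ for significance and a 2-fold change for strong regulation) can be expressed as a small helper. This is a minimal sketch, assuming precomputed group means and p-values; the function name and the example values are illustrative, not the study's data or pipeline.

```python
import math

def classify_regulation(proteins, p_cutoff=0.05, fold_cutoff=2.0):
    """Split proteins into significantly regulated ones (p <= p_cutoff) and
    those additionally exceeding the fold-change threshold.
    `proteins` maps name -> (mean_treated, mean_control, p_value)."""
    significant, strong = [], []
    for name, (treated, control, p) in proteins.items():
        if p > p_cutoff:
            continue  # not significant at the chosen alpha
        significant.append(name)
        log2fc = math.log2(treated / control)
        if abs(log2fc) >= math.log2(fold_cutoff):
            strong.append(name)  # |fold change| of 2 or more
    return significant, strong
```

For example, a protein at 4x its control abundance with p = 0.01 counts in both lists, while one at 1.2x counts only as significant.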
CBZ treatment led to the downregulation of proteins involved in “organismal death”, “necroptosis”, and “cell death of epithelial cells”, whereas proteins involved in “cell proliferation of tumor cell lines” were upregulated. Global GSEA enrichment analysis showed enrichment (enrichment score: 0.58) of protein expression involved in an inflammatory pathway (“HP_CHRONIC_OTITIS_MEDIA”) in LCL721.221/sHLA-B*57:01 cells treated with EPX compared to parental LCL721.221 cells treated with EPX. ELF4H, NCE1, STAT3, DNAAF5, RAZIB and NFKB1 were upregulated following EPX treatment in LCL721.221/sHLA-B*57:01 cells and were downregulated in parental cells after EPX treatment (Figure 5). EPX treatment induced the regulation of 14 of the 25 most significant pathways predicted by the IPA software, whereas CBZ treatment induced the regulation of 10 pathways. The most activated pathway after EPX treatment was predicted to be the “Necroptosis Signaling Pathway”, and the “2-ketoglutarate Dehydrogenase Complex” was predicted to be most downregulated. CBZ treatment induced the most robust activation of “G2/M DNA Damage Checkpoint Regulation” and inhibited “ELF2 Signaling” (Figure S1). ## 4. Discussion Recent studies have demonstrated that, besides HLA-A*31:01 and HLA-B*15:02, HLA-B*57:01 is also strongly linked to CBZ-induced ADRs. While CBZ administration in HLA-A*31:01+ patients causes diseases such as MPE and DRESS, CBZ administration in HLA-B*57:01+ Europeans resulted in SJS/TEN disease phenotypes [24], as observed for HLA-B*15:02+ patients [11]. SJS and TEN manifest as severe, life-threatening cutaneous and mucosal necrosis and have to be treated in specialized burn units [46]. When SJS/TEN emerge as HLA-mediated ADRs that involve T cell recognition of foreign peptide/HLA complexes, the withdrawal of the drug should assure recovery of the clinical state. However, SJS/TEN impair the affected skin so severely that recovery is rarely possible. 
Therefore, prevention of such an adverse condition is mandatory in pharmacovigilance management strategies. Although the prophylaxis of HLA-mediated ADRs is not feasible and individual patient cases are often underreported due to the polymorphic character of HLA molecules, the need for conscientious analysis of HLA-mediated ADRs immediately following their establishment should be obvious. HLA molecules exhibit unique properties in the immune system. Host HLA molecules bind foreign antigens. This exceptional co-recognition requires exquisite specificity and genetic restriction for the host T cells [13]. HLA diversity and the corresponding T cell diversity restrict a comprehensive screening of patient cohorts in phase I, II and III studies [47,48], where pharmacokinetics and pharmacodynamics are tested prior to approval of a drug. In the era of fast and sophisticated methods for viewing the cellular content, proteomics is the instrument for understanding and long-term prevention of HLA-mediated ADRs. HLA-restricted peptidomics and T cell analysis deliver indisputable answers for understanding immune compatibility, but in some cases the host T cells fail to recognize the presented peptide/drug/HLA ligand of host origin. Understanding indistinct intracellular activities such as metabolism, cytokine expression, and downregulation of certain proteins in drug-tolerant patients would certainly be beneficial for drug-sensitive patients with a susceptible HLA type. Utilizing proteomics as a mirror of cellular events should support this objective. In the present study, we aimed to illuminate the underlying mechanism of HLA-B*57:01-mediated hypersensitivity to CBZ by full proteome analysis of CBZ- or EPX-treated LCL721.221/sHLA-B*57:01 cells. We chose the lymphoblastoid LCL721.221 cells because these cells are not able to metabolize CBZ to EPX. The metabolization of CBZ to EPX occurs exclusively in hepatocytes and is catalyzed by cytochrome P450 enzymes [49]. 
Thus, the impact of CBZ and EPX treatment on the cellular proteome of LCL721.221 cells can be analyzed orthogonally. Prior to proteome analysis, the specificity of the drug-HLA interaction was determined via UPLC-MS/MS analysis. The selection of CBZ or the metabolite EPX by the respective HLA molecule is decisive for the fate of the HLA-expressing cell, as previously described [32]. We previously demonstrated that CBZ binds exclusively to HLA-A*31:01, leading to severe skin lesions, and that the exclusive interaction of EPX, not CBZ, with HLA-B*15:02 [32] led to life-threatening SJS/TEN diseases. The present study showed that HLA-mediated ADRs have to be meticulously analyzed to comprehensively understand their clinical outcome. In this paper, we show that both CBZ and EPX are able to engage with HLA-B*57:01. The main question is whether one or both drugs would, in cooperation with HLA-B*57:01, impact the cellular content of the antigen-presenting cells and possibly their microenvironment. Therefore, LCL721.221 cells were transduced with sHLA-B*57:01 and exposed to the respective drug. LCL721.221 cells are not able to metabolize CBZ to EPX and are thus a suitable instrument for answering this question. LCL721.221/sHLA-B*57:01 cells were treated with 25 µg/mL CBZ or EPX, respectively, and cell lysates were applied to full proteome analysis. By subtracting the proteomic changes that were introduced through transduction of the cells with the HLA-B*57:01 allele, we were able to observe the effects that occurred solely due to the interplay of CBZ or EPX with the HLA-B*57:01 molecule. We found that EPX treatment induced significant changes in the proteome of LCL721.221/sHLA-B*57:01 cells. In contrast, CBZ treatment resulted in minimal changes in the proteome of LCL721.221/sHLA-B*57:01 cells. 
CBZ treatment of LCL721.221/sHLA-B*57:01 cells led to only 35 proteins regulated more than 2-fold, whereas EPX treatment of the cells resulted in 134 such proteins. Only a slight overlap of ten significantly regulated proteins could be detected between CBZ- and EPX-treated cells (Figure 3A and Figure S2). Upstream regulator analysis via IPA revealed just one activated upstream regulator, the serine/threonine kinase IKBKE, responsible for the changes in protein abundance in CBZ-treated cells. In contrast, 14 kinases were detected as regulated in EPX-treated cells (Figure 3C). Although UPLC-MS/MS analysis revealed comparable binding of CBZ and EPX to sHLA-B*57:01 molecules (Figure 1), the CBZ-induced changes in the cellular proteome of LCL721.221/sHLA-B*57:01 cells seem marginal when compared to the EPX-induced changes. Following EPX treatment, the receptor tyrosine kinase ERBB2 was identified as the most activated upstream regulator (Figure 3). ERBB2 is mainly involved in inflammatory and growth-associated processes [50]. Consequently, proteins that were predicted to be influenced and were significantly upregulated more than two-fold were IKBKB, MCM5, MCM7, and POLD2. IKBKB is described to activate NFκB, which is involved in inflammatory, pro-apoptotic and necrotic processes [51]. Additionally, NFκB was found to be significantly enriched in the GSEA enrichment analysis in an overall inflammatory process (Figure 5). MCM5 and MCM7 are involved in DNA replication and are responsive to cytokine-induced gene transcription. MCM5 has been shown to be central for STAT1-mediated gene transcription [52]. In line with this, JAK1 was also predicted to be activated (Figure 3). The JAK/STAT pathway plays a central role in the response to external inflammatory stimuli [53]. Consistent with this finding, STAT5 upregulation has also been described for HLA-B*15:02 after EPX treatment [23]. 
The comparison of the proteomic profiles of cells with intracellular small molecule/protein engagement [23] clearly shows that EPX/HLA engagement triggers the upregulation of inflammatory pathways. The sudden upregulation of proteins described as part of signal transduction pathways, and thus as triggers of autoimmune reactions through effector cell activation, could not have been predicted by conventional methods. We further describe the upregulation of POLD2, an enzyme that is involved in DNA repair processes and preserving DNA integrity [54]. POLD2 was recently uncovered as a tumor suppressor [55] and prognostic biomarker in distinct cancers [56]. Consistent with POLD2 upregulation, more than 50 proteins involved in DNA repair, replication and recombination were regulated in response to EPX treatment, and DNA regulatory processes were predicted to be activated (Figure 4). Moreover, FLT1 and EGFR were also indicated as activated. Both are known for their potential to induce apoptosis through either NFκB (FLT1) [57] or STAT3 (EGFR) activation [58]. FLT1 has been described as a therapeutic target in inflammatory events [59], whereas EGFR is known as a key regulator in cell division and cancer development [60]. In conclusion, the engagement of EPX/HLA-B*57:01 induces inflammatory, pro-apoptotic and necrotic processes in LCL721.221 cells when compared to parental LCL721.221 cells. These findings seem to be consistent with, and might be a coherent explanation of, the disease phenotype of SJS/TEN in HLA-B*57:01+ patients, which is associated with keratinocyte death, cutaneous blistering and epidermal detachment [61]. Consistent with the upregulation of proinflammatory proteins, INSR was observed to be inactivated after EPX treatment (Figure 3). INSR has been described as inhibiting inflammatory and cytokine-mediated processes when overexpressed [62]. 
These data illustrate the finely tuned intracellular cooperation between signal transduction proteins and the unpredictable interference of drug/HLA complexes. In addition, CDKN1A is predicted to be inhibited (Figure 3). Although CDKN1A is generally an inhibitor of cell proliferation, in B cells it acts as an activator of proliferation and is closely regulated through caspase-3-mediated degradation [63]. CDKN1A downregulation might therefore suggest caspase-3 activation and cleavage of CDKN1A. The “Necroptosis Signaling Pathway” was detected as the most activated pathway following EPX treatment of LCL721.221/sHLA-B*57:01 cells. Cell death through necroptosis is a form of programmed necrosis that is mediated by pattern recognition receptors (PRRs) and diverse cytokines. Necroptosis of cells results in the secretion of damage-associated molecular patterns (DAMPs) and, subsequently, an inflammatory immune response [64,65]. Our observations indicate an imbalance of pro- and anti-inflammatory processes through up- or downregulation of certain proteins, which might lead to an excessive immune reaction, mainly caused by EPX, in affected patients with the susceptible HLA allele. Taken together, although EPX/HLA-B*57:01 cooperation introduced the described drastic changes in the proteome, alterations through CBZ/HLA-B*57:01 cooperation were only marginal. It seems obvious that, similar to CBZ-induced hypersensitivity in HLA-B*15:02+ patients, EPX is the main driver of the SJS/TEN phenotype in HLA-B*57:01+ patients. The engagement of EPX and the HLA molecule seems to perturb the intracellular processing of healthy cells and produces a stress response resulting in DNA damage and, consequently, extensive inflammation. The possibility of studying the effects of EPX/HLA and CBZ/HLA orthogonally in the present study offers the potential to appreciate the different clinical outcomes of HLA-mediated CBZ hypersensitivity. 
The metabolization of CBZ to EPX occurs via the cytochrome P450 system in the liver. As CBZ is metabolized to EPX, the balance between the two compounds shifts towards an excess of EPX; the inflammatory, life-threatening condition of affected patients therefore becomes more severe and might progress from the initiation of SJS to TEN. TEN is a serious condition that is lethal in >$50\%$ of affected patients or at least leads to incurable long-term harm. Embedding these findings in the biological context of systemic inflammation, the key processes that might drive the hypersensitivity reaction are the upregulated pathways indicating incipient cell death in HLA-B*57:01-transduced cells, for example the strong activation of the necroptosis pathway after EPX treatment (Figure S1). A recent study of the hypersensitivity reaction in HLA-B*15:02+ patients revealed that the presence of CD4+CD25+CD127loCD39+ Tregs, which can reduce extracellular ATP by degrading it via CD39 and CD73 to adenosine, determines the conversion from a non-responder to a responder [66]. Taking this study into account, we hypothesize that the release of intracellular ATP into the extracellular space, facilitated by EPX-induced inflammatory processes, is the initial step towards systemic inflammation when not enough CD4+CD25+CD127loCD39+ Tregs are present to reduce the effect of extracellular ATP and the subsequent IFNγ production. ## 5. Conclusions The future of pharmacological appreciation of drug and medical safety relies on the comprehension of the functional consequences of individual immunogenetics.
# Decreasing the Crystallinity and Degree of Polymerization of Cellulose Increases Its Susceptibility to Enzymatic Hydrolysis and Fermentation by Colon Microbiota ## Abstract Cellulose can be isolated from various raw materials and agricultural side streams and might help to reduce the dietary fiber gap in our diets. However, the physiological benefits of cellulose upon ingestion are limited beyond providing fecal bulk. It is barely fermented by the microbiota in the human colon due to its crystalline character and high degree of polymerization. These properties make cellulose inaccessible to microbial cellulolytic enzymes in the colon. In this study, amorphized and depolymerized cellulose samples with an average degree of polymerization of less than 100 anhydroglucose units and a crystallinity index below $30\%$ were made from microcrystalline cellulose using mechanical treatment and acid hydrolysis. This amorphized and depolymerized cellulose showed enhanced digestibility by a cellulase enzyme blend. Furthermore, the samples were fermented more extensively in batch fermentations using pooled human fecal microbiota, with fermentation degrees of up to $45\%$ and a more than eight-fold increase in short-chain fatty acid production. While this enhanced fermentation turned out to be highly dependent on the microbial composition of the fecal pool, the potential of engineering cellulose properties towards increased physiological benefit was demonstrated. ## 1. Introduction Cellulose is the most abundant renewable material in nature, being the primary building block of the plant cell wall. It consists of long unbranched β-1,4-bound glucose polymers organized in long crystalline fibers with strong interactions between the different polymers [1]. The high degree of crystallinity, high degree of polymerization and low specific surface area give cellulose a very recalcitrant character. 
This recalcitrance is important in the plant cell wall, since cellulose provides the plant with mechanical strength and resilience against breakdown, but it is a major drawback in valorization strategies of (ligno)cellulose, e.g., in the context of biorefineries [1,2]. Furthermore, cellulose acts as an insoluble and recalcitrant dietary fiber in the human body when ingested. The use of cellulose as dietary fiber in foods could be very relevant, since the food industry increasingly searches for dietary fiber enrichment strategies. While a sufficient daily intake of dietary fiber is correlated with various health benefits, such as a decreased risk of colorectal cancer, obesity, cardiovascular diseases and diabetes mellitus type II [3,4,5,6], the average daily intake of dietary fiber is too low in Western diets [7]. Within dietary fiber fortification strategies, specific attention goes to (partially) fermentable dietary fiber. Fermentation of dietary fiber in the colon is correlated with different additional physiological benefits, linked to the production of short-chain fatty acids (SCFA), which are essential for colonic health, glucose and cholesterol homeostasis and the regulation of appetite [8,9,10]. The fermentability of cellulose in the human colon is very low, however [11]. Cellulolytic enzymes are produced by Ruminococcus, Enterococcus, Bacteroides or Prevotella species in the colon [12,13], but the highly ordered nature of cellulose limits the accessibility of the cellulosic fibers and the glucosidic β-1,4 bonds for enzymatic breakdown and results in very limited fermentability. We can assume that the fermentability of cellulose, and with it its physiological benefits, could be improved by breaking this recalcitrance before ingestion. Such accessible cellulose would remain insoluble and indigestible but could be fermented to a greater extent in the colon. 
Previous research already stated that the fermentability of cellulose depends on its physical appearance [14], and some attempts to improve cellulose fermentability by reducing the particle size were already successful in human in vitro experiments [15,16]. However, the impact of the degree of polymerization and crystallinity has not been investigated in this context. At the same time, these structural parameters are known to affect cellulose accessibility greatly [17,18,19]. Plenty of (ligno)cellulosic biomass pretreatment protocols, such as milling, irradiation, ultrasonication, hydrothermal treatment or solubilization in ionic liquids, have been developed and optimized to alter these cellulose characteristics [17,20,21,22,23,24,25]. Moreover, several of them, such as ball milling, acid hydrolysis or solubilization in ionic liquid, are linked to an improved cellulose enzymatic accessibility as well [26,27,28]. In this study, two effective pretreatment methods, ball milling and acid hydrolysis, are combined to make cellulose with a lowered degree of polymerization, a lowered degree of crystallinity and the combination of both. These samples are used to gain insight into the effect of these parameters on the enzyme accessibility of cellulose and its fermentability by colon microbiota, using batch in vitro fermentations. ## 2.1. Materials Microcrystalline cellulose (Avicel PH-101, $3.4\%$ moisture), citric acid (analytical grade), the Cellic CTec2 cellulase enzyme blend and all other analytical chemicals and solvents were purchased from Sigma-Aldrich (Deurne, Belgium). ## 2.2. Production of Dietary Fiber Samples Starting from Microcrystalline Cellulose An overview of the production of the different dietary fiber samples is given in Figure 1 and Table A1. Microcrystalline cellulose (MC) was first depolymerized using a ball milling step and acid hydrolysis, similar to our previous work [29]. 
MC was pretreated in a planetary ball mill (PM100, Retsch GmbH, Haan, Germany) in batches of 20 g with 6 zirconium oxide balls (Ø 10 mm) to induce paracrystalline zones in the cellulose fiber. These ball-milled cellulose fibers are called amorphized cellulose (AC). Milling time (60–360 min) and speed (400–500 rpm) were varied. Afterwards, the paracrystalline zones in the AC fibers were hydrolyzed with a $10\%$ citric acid solution in water. Hydrolysis time (2–6 h) and temperature (90–130 °C) were varied (Table A1). The depolymerized insoluble cellulose samples were washed until neutral pH and dried for 45 h at 60 °C, yielding depolymerized cellulose (DC). After being dried, the DC sample was again treated in the planetary ball mill with 6 zirconium oxide balls (Ø 10 mm) at 500 rpm to produce amorphized depolymerized cellulose (ADC). Treatment times of 30, 60 and 360 min were used. All samples and respective production parameters are summarized in Table A1. ## 2.3. Characterization of Dietary Fiber Samples The average degree of polymerization (avDP) of cellulose was determined viscometrically in triplicate, based on the method of the French Institute for Normalisation [30]. Fiber samples (0.075 g) were dissolved in a 0.5 M bis(ethylenediamine)copper(II)hydroxide solution (15 mL), and the viscosity of this solution at 25 °C was measured with a capillary viscometer, type nr. 509 04 (Schott Geräte, Jena, Germany). The avDP was calculated from the boundary viscosity of the solution (η) based on the empirical relation avDP^α = η/K, with α and K empirical constants equal to 1 and 7.5 × 10^−3, respectively. The boundary viscosity η was determined from η_a = η·C·10^(0.14·η·C), with η_a the specific viscosity of the solution and C the cellulose concentration (g/mL). 
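Because the relation between the measured specific viscosity η_a and the boundary viscosity η is implicit, η must be solved numerically before the avDP can be computed. A minimal sketch, assuming consistent units (η in mL/g, C in g/mL) and using the empirical constants given above; the function names are illustrative:

```python
import math  # kept for parity with the exponential form; 10**x is used directly

def boundary_viscosity(eta_a, c, tol=1e-9):
    """Solve eta_a = eta * C * 10**(0.14 * eta * C) for eta by bisection;
    the left-hand side is monotonically increasing in eta for eta, C > 0."""
    f = lambda eta: eta * c * 10 ** (0.14 * eta * c) - eta_a
    lo, hi = 0.0, 1.0
    while f(hi) < 0:          # expand the bracket until it contains the root
        hi *= 2
    while hi - lo > tol:      # bisect to the requested tolerance
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def average_dp(eta, k=7.5e-3, alpha=1.0):
    """avDP**alpha = eta / K; with alpha = 1 this reduces to eta / K."""
    return (eta / k) ** (1.0 / alpha)
```

For example, at the cellulose concentration used above (0.075 g in 15 mL, i.e. C = 0.005 g/mL), a boundary viscosity of 1.5 mL/g corresponds to an avDP of 200.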
The crystallinity of the fiber samples was determined by X-ray powder diffraction (XRD) on a high-throughput STOE STADI P Combi diffractometer (STOE & Cie GmbH, Darmstadt, Germany) in transmission mode with a Ge[111] monochromator (λ = 1.5406 Å, Cu Kα source). Crystallinity indexes were determined by the peak-height method of Segal and coworkers [31]. ## 2.4. Enzymatic Digestibility Analysis The enzymatic digestibility of the dietary fiber samples was determined by calculating the enzymatic conversion (EC) after incubating samples with the Cellic CTec2 cellulase enzyme blend, as described by Chen and coworkers [32]. Cellulose was suspended ($1.0\%$ w/v) in a 50 mM sodium acetate buffer (pH 4.8) with 20 U Cellic CTec2 cellulase enzyme blend per gram cellulose and stirred at 900 rpm. After 1 h of incubation at 40 °C, the enzymes were denatured by heating the solution (5 min, 110 °C). The solid fraction was separated from the supernatant by centrifugation at 5000 g. The amount of glucose and cellobiose in the supernatant from cellulose hydrolysis was determined by high-performance anion-exchange chromatography with pulsed amperometric detection (HPAEC-PAD) on a Dionex ICS3000 chromatography system (Sunnyvale, CA, USA). Saccharides were separated on a Dionex CarboPac PA-100 column (4 × 250 mm), equilibrated with 90 mM NaOH. The enzymatic conversion was calculated from the mass of glucose (m_g) and cellobiose (m_cb) in the supernatant and the mass of starting substrate (m_c): EC = (m_g + m_cb)/(1.1·m_c) (1) ## 2.5. In Vitro Fermentation of Dietary Fiber Samples Using Human Fecal Inoculum In vitro fermentation experiments (trials 1, 2 and 3) were performed as described by De Preter et al. [33]. Fresh fecal samples of 8 healthy donors (consuming a mixed western diet, no history of antibiotic use in the last six months) were collected and pooled to make a 10 w/v% fecal slurry in phosphate-buffered saline.
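The enzymatic conversion of Equation (1), read as EC = (m_g + m_cb)/(1.1·m_c), can be applied directly. A minimal sketch follows; the example masses are illustrative, not measured values from the study.

```python
def enzymatic_conversion(m_glucose, m_cellobiose, m_cellulose):
    """Enzymatic conversion per Equation (1): EC = (m_g + m_cb) / (1.1 * m_c).

    All masses in the same unit (e.g. mg). The factor 1.1 corrects for the
    mass of water added to the anhydroglucose units during hydrolysis.
    """
    return (m_glucose + m_cellobiose) / (1.1 * m_cellulose)

# Illustrative: 28 mg glucose + 5 mg cellobiose released from 100 mg cellulose
ec = enzymatic_conversion(28.0, 5.0, 100.0)  # 0.30, the order reported for MC
```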
After intensive shaking, this fecal slurry was decanted, and the supernatant (referred to as the inoculum) was added to different fiber samples (25 mL to 100 mg cellulose) in triplicate. After being flushed with nitrogen gas, the tubes were incubated anaerobically for 48 h in a shaking water bath at 37 °C. At the end of incubation, the pH of the slurry was measured with a digital pH meter (Hanna Instruments HI 9025, Temse, Belgium). Aliquots were stored at −20 °C for the determination of short-chain fatty acid (SCFA) concentration and microbial analysis. ## 2.6. Short-Chain Fatty Acid Analysis The amounts of acetate, propionate and butyrate in the fecal inoculum were determined according to the gas-chromatographic method described by Bautil et al. [34]. In this procedure, a $25\%$ (w/v) NaOH solution was added to the inoculum to create sodium salts of the SCFA, which were neutralized by adding a $50\%$ sulfuric acid solution afterwards. These salts were extracted to a diethyl ether phase, which was analyzed with an Agilent 6890 Series gas chromatograph with an EC-1000 Econo-Cap column (25 m × 0.53 mm, 130 °C, 1.2 μm film thickness) and helium (20 mL/min) as carrier gas. A flame ionization detector at 195 °C measured the different fatty acids. Within this analysis, 2-ethyl butyric acid was used as an internal standard. ## 2.7. Microbial Analysis Microbial profiling was done as described by Falony et al. [35]. Nucleic acids were extracted from the aliquots using the RNeasy PowerMicrobiome kit (Qiagen, Venlo, The Netherlands). The manufacturer’s protocol was modified by adding a heating step at 90 °C for 10 min and excluding DNA removal steps. Afterwards, the extracted DNA was amplified in triplicate using 16S primers 515F (5′-GTGYCAGCMGCCGCGGTAA-3′) and 806R (5′-GGACTACNVGGGTWTCTAAT-3′) targeting the V4 region. Deep sequencing was performed on a MiSeq platform (2 × 250 paired-end [PE] reads; Illumina, San Diego, CA, USA).
Initial quality assessment, sequence filtering and trimming of the FASTQ files were carried out using the FastQC software (version 0.11.9) and the ‘filterAndTrim’ function of the DADA2 pipeline package. Analysis thereafter was performed using the ‘mergePairs’ function of the DADA2 package, which merges the forward and reverse sequences. Any chimeric sequences produced during aberrant PCR annealing were identified and removed. Taxonomy was assigned to the sequences using a naïve Bayesian classifier with the SILVA database (version 138.1) as a reference. ## 2.8. Statistics Significant differences were detected by performing a one-way analysis of variance (ANOVA) using JMP Pro 16 (SAS Institute), with comparison of the mean values using the Tukey test (α = 0.05). ## 3.1. Production of Samples with Different DP and Crystallinity from Microcrystalline Cellulose To investigate the impact of crystallinity and avDP on the enzymatic accessibility and fermentability by colon microbiota, a modification protocol combining planetary ball mill treatments and acid hydrolysis was used (Figure 1). First, MC was treated in a ball mill to decrease the crystallinity of the cellulose by incorporation of paracrystalline zones. This decrease in crystallinity impacts the levelling-off degree of polymerization (LODP) of the cellulose, which represents the length of the crystalline polymers that remain insoluble after a fast hydrolysis of the easily accessible paracrystalline zones [36]. Second, the ball-milled, or amorphized, cellulose (AC) was hydrolyzed with citric acid at elevated temperature (90–130 °C) to hydrolyze the polymers in the paracrystalline zones. After this hydrolysis, depolymerized cellulose (DC) with a decreased avDP is obtained. Finally, this DC was treated in the planetary ball mill a second time for 30–360 min to produce amorphized depolymerized cellulose (ADC), which is expected to be highly accessible.
All samples are listed in Table A1. The influence of processing conditions (ball mill speed/time and acid hydrolysis time/temperature) on cellulose properties was extensively investigated in our previous study for the production of DC [29]. Both the avDP and the crystallinity of the DC fibers can be finely tuned by varying these process parameters. In short, when the acid hydrolysis is not performed long enough to hydrolyze all the paracrystalline zones of the AC fibers, the LODP will not be reached and the crystallinity of the resulting DC fibers will remain low [29]. While the impact of process parameters on DC production was investigated extensively, the impact of the second ball mill treatment on this DC had not yet been studied. For a sample with relatively high crystallinity (DC with avDP of 32 AGU and crystallinity index of 0.62), the effect of this additional ball mill treatment on avDP and crystallinity is shown in Figure 2. Figure 2a shows that the peaks from crystalline planes in the diffractogram of the DC fibers indeed disappeared due to the ball mill treatment. Milling a DC for only 15 min already disrupted most of the crystalline structure, but the crystalline reflection at 2θ of 22° was still more prominent in these diffractograms than in those of ADC fibers with longer milling times. After 30 min of milling of the DC at 500 rpm, an amorphous diffractogram was obtained, whose shape no longer changed with longer milling times. Previously, it was shown that the crystallinity decrease during ball milling of unmodified MC was limited during the first 30 min of the milling process [29]. The breakdown of crystallites, therefore, occurred more slowly for unmodified MC than for this DC (Figure 2a). This faster decrease in crystallinity for the DC might be due to the different types of crystallites that need to be broken down.
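The crystallinity indexes discussed here follow Segal's peak-height method (Section 2.3). A minimal sketch is given below; the 2θ windows are the conventional ones for cellulose I with Cu Kα radiation and are our assumption, as is the synthetic diffractogram.

```python
def segal_cri(two_theta, intensity):
    """Peak-height crystallinity index (Segal): CrI = (I200 - Iam) / I200.

    I200: maximum intensity of the (200) reflection, near 2-theta = 22.5 deg;
    Iam: amorphous-background intensity, taken as the minimum near 18 deg.
    """
    i200 = max(i for t, i in zip(two_theta, intensity) if 21.5 <= t <= 23.5)
    i_am = min(i for t, i in zip(two_theta, intensity) if 17.0 <= t <= 19.0)
    return (i200 - i_am) / i200

# Synthetic diffractogram points (2-theta in degrees, counts) for illustration
tt = [17.0, 18.0, 19.0, 21.5, 22.5, 23.5]
counts = [300.0, 250.0, 320.0, 700.0, 1000.0, 650.0]
cri = segal_cri(tt, counts)  # (1000 - 250) / 1000 = 0.75
```

An amorphous sample, whose diffractogram lacks a sharp (200) peak above the background, yields a CrI near zero by this measure.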
As visualized in Figure A1, $32\%$ of the crystallites in the DC fibers were cellulose II polymorphs, while it is known that no cellulose II is present in unmodified MC [37]. We can hypothesize that cellulose II crystallites, formed during the first ball mill treatment and hydrolysis [38], are easier to decrystallize than cellulose I crystallites. The faster decrystallization of DC can also be caused by the lower avDP of the DC fibers. Depolymerization of the DC fibers does not seem to occur during the ball mill treatment, since no significant decrease in avDP was detected for the different ADC fibers (Figure 2b). Previous research stated that a ball mill treatment could not depolymerize cellulose shorter than 50 AGU [29]. This seems to be confirmed here, since no depolymerization of the DC fibers (DP 32) occurred. ## 3.2. Influence of Cellulose Structural Properties on Enzymatic Accessibility Figure 3 shows the enzymatic conversion of the modified cellulose into glucose or cellobiose after 1 h of reaction with the commercial Cellic CTec2 enzyme blend under optimal conditions. Unmodified MC is compared with amorphized MC (AC124), DC and ADC with different avDP (Table A1). Only $30\%$ of the long, crystalline MC was converted into glucose and cellobiose by the cellulase blend within one hour (conversion degree of 0.30 ± 0.04). Decreasing the crystallinity by ball milling (260 min) improved the conversion degree slightly, to 0.35 ± 0.01, but decreasing the avDP had the opposite effect. Surprisingly, all DC samples were less accessible to the enzyme blend than unmodified MC or AC, while Kumar and Wyman showed that a shorter DP results in higher accessibility [39]. We can hypothesize that a decrease in avDP from 168 to 28 AGU is not sufficient to compensate for the removal of the paracrystalline zones and the presence of the cellulose polymorph II in the DC fibers, two structural properties that lower enzymatic accessibility.
This hypothesis is supported by the positive association between avDP and enzymatic digestibility of the different DC samples. These various DC samples also slightly differed in crystallinity: the crystallinity of DC104 was lower than that of DC28, since the LODP was not reached for the longer DC fibers (Table A1). This is because the mildest hydrolysis conditions were used for making DC104, leaving some easily accessible paracrystalline zones after drying. It seems that these small differences in crystallinity have a more significant impact on enzymatic conversion than the differences in avDP. Since the DC samples showed lower enzymatic digestibility for the cellulase blend than MC or AC, it can be concluded that a DP decrease to values lower than 100 AGU is not, by itself, an effective way to increase the enzymatic accessibility of cellulose. However, this DP decrease pays off once the short cellulose is made amorphous again in the ball mill. ADC with an avDP of 28 AGU had a conversion degree after 1 h of 0.52 ± 0.07, higher than the AC sample. Furthermore, there seems to be a negative correlation between the avDP and enzymatic digestibility for these amorphous samples. Even within the small DP range of 20 to 110 AGU, shortening the cellulose avDP enhances its enzymatic digestibility once a low crystallinity is assured. ## 3.3. Effect of Enhanced Accessibility of Cellulose on Fermentation in the Human Colon A correlation between the enzymatic accessibility of the cellulose samples for the Cellic CTec2 enzyme blend and their fermentability by colon microbiota can be expected, since the fermentation of complex carbohydrates likewise starts with hydrolysis by excreted microbial hydrolytic enzymes [40]. The behaviour of the fiber samples in the human large intestine was evaluated in three independent batch fermentation experiments using fecal inocula.
In Figure 4, the production of linear SCFA and the pH evolution during each fermentation experiment are shown. In these experiments, MC, AC, DC and ADC with different avDP were added to the fecal inocula (Table A1). In trial 1 (Figure 4a,b), only a limited amount of linear SCFA was produced in the fecal inoculum without cellulose addition (blank) during the incubation time of 24 h. Adding dietary fiber samples to the inoculum, however, resulted in enhanced production of SCFA during incubation. The majority of SCFA was only produced after the first 8 h of incubation had passed. As described by Mikkelsen et al., cellulose fermentation in a batch in vitro system is slow compared to that of other, readily fermentable carbohydrates, such as arabinoxylans and glucans [14]. In this later fermentation phase, it became clear that only a limited amount of MC was fermented within 24 h. During the incubation of MC, the linear SCFA concentration only increased from 10.83 ± 2.63 mmol/L to 19.99 ± 0.71 mmol/L. Disrupting cellulose crystallinity by ball milling already increased the fermentability slightly: the average SCFA production after 24 h from the AC sample was 0.57 times higher than the SCFA production from MC. Decreasing the avDP of cellulose was a more effective way to improve the accessibility of cellulose for the gut microbiota: the linear SCFA concentration produced by fermentation of DC with DP 59 AGU and 32 AGU was 2.6 and 1.8 times higher, respectively, than for unmodified MC. Contrary to the breakdown by the CTec2 enzyme blend, the microbiota in this pooled inoculum could access the DC better than the AC. Furthermore, the slightly lower crystallinity of DC59 resulted in a slightly higher fermentation degree for DC59 than for DC32. The highest SCFA production, however, was obtained upon the addition of ADC to the fecal pool, with a linear SCFA concentration of 41.5 ± 6.4 mmol/L at the end of incubation.
By reducing both the degree of polymerization and crystallinity of MC, the formation of linear SCFA by fermentation could be multiplied by a factor of 4.2. Based on the difference in mass of linear SCFA between the blank and ADC-enriched inoculum at 48 h, a minimal degree of fermentation (MDOF) of 45.8 ± $10.9\%$ could be derived for the ADC25 sample, while this was only 7.6 ± $0.9\%$ for MC. This MDOF is an underestimation of the actual fermentability since it only takes into account the mass of linear SCFA as a fermentation product. Furthermore, adding ADC to the fecal pool resulted in the largest pH drop, from 6.57 ± 0.01 to 5.67 ± 0.08 (Figure 4b). In vivo, such a pH drop could be associated with different physiological benefits, such as the repression of pathogen growth and proteolytic fermentation [9]. In a second in vitro fermentation experiment, two different chain lengths of ADC were investigated (Figure 4c,d). Additionally, the ball mill posttreatment time for the ADC was reduced to 1 h, instead of 6 h. Although a different pool of human feces was used, the same trends could be observed for the fermentability of these modified celluloses: unmodified MC was only fermented to a minimal extent, while decreasing the DP of the cellulose resulted in higher production of linear SCFA from the cellulose, up to a factor of 5.4 for DC32 after 48 h. The highest linear SCFA production was found for ADC samples ADC27 and ADC37 (8.2 and 8.4 times higher than for MC, respectively). The small difference in avDP, 37 versus 27, did not induce a significant difference in the fermentability of the ADC sample. The ADC samples were fermented to at least 42.6 ± $3.6\%$, while the MDOF of MC was only 5.7 ± $0.2\%$. The enhanced fermentation resulted in a larger pH drop of the ADC37-enriched inoculum (pH 5.58) than the MC-enriched inoculum (pH 6.02) (Figure 4d). 
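The minimal degree of fermentation (MDOF) quoted above can be reproduced from SCFA concentrations. The sketch below encodes one plausible reading of the definition (mass of linear SCFA attributable to the substrate, sample minus blank, over substrate mass); the exact accounting used by the authors is not detailed in the text, so the helper and all numbers are illustrative.

```python
# Molar masses of the linear SCFA (g/mol)
MW = {"acetate": 60.05, "propionate": 74.08, "butyrate": 88.11}

def mdof(sample_mM, blank_mM, volume_L, substrate_mg):
    """Minimal degree of fermentation (%): mass of linear SCFA formed from
    the substrate (sample minus blank at end of incubation) over substrate mass.
    Units: mmol/L * L = mmol; mmol * (g/mol) = mg, since g/mol == mg/mmol.
    """
    scfa_mg = sum((sample_mM[a] - blank_mM[a]) * MW[a] * volume_L for a in MW)
    return 100.0 * scfa_mg / substrate_mg

# Illustrative run: 100 mg cellulose in 25 mL inoculum, as in the protocol
blank = {"acetate": 12.0, "propionate": 4.0, "butyrate": 2.0}
sample = {"acetate": 32.0, "propionate": 9.0, "butyrate": 7.0}
deg = mdof(sample, blank, 0.025, 100.0)  # ~50% of the substrate accounted for
```

As noted in the text, this is an underestimate of the actual fermentability, since linear SCFA are only one class of fermentation products.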
Furthermore, it was demonstrated in this trial that the decreased pH resulted in a lowered production of branched SCFA as well (Figure A2). MC addition reduced the relative amount of branched SCFA from $8.5\%$ to $7.9\%$, but the addition of ADC caused a further decrease to $6.0\%$. This is a first indication of lowered protein fermentation in the inocula. Detailed analysis of the acetate, butyrate and propionate concentrations demonstrates that the relative amounts of butyrate and propionate also increased after 48 h upon the addition of ADC. The relative amount of butyrate in total linear SCFA was $13.1\%$ for the blank fecal slurry, while it was $17.6\%$ for the fecal slurry with ADC37 addition (Figure A2). This enhanced butyrate production suggests an additional physiological benefit, since enhanced butyrate production is linked to a lower risk of colon inflammation and cancer [41]. In this second trial, the DC and ADC samples were mainly fermented between 24 and 48 h, while the main cellulose fermentation happened between 8 and 24 h in the first trial. The presence or absence of easily accessible fibers in the starting inoculum partly explains this difference: a fast production of linear SCFA within the first 4 h was observed in the second trial but not in the first, suggesting that the microbial community in the second trial first fermented other (more easily fermentable) carbohydrates and therefore needed more time to switch to a cellulolytic fermentation metabolism. A third trial (Figure 4e,f) showed a different fermentability behaviour for the ADC. No significant differences compared to the fermentability of unmodified MC were observed, and the pH decrease during the experiment was very limited for both the MC- and ADC-enriched inocula. Next to this trial, two other repetitions showed similar behaviour, with no cellulose fermentation occurring for MC or ADC (data not shown).
The starting microbial composition of the inoculum naturally differed between trials, since different donors were used for each experiment. In Figure 5, the composition of the microbiome of the three in vitro trials is given at the genus level at the starting point of the experiment and after incubation with ADC. For in vitro fermentation trials 1 and 2, the microbiome composition was dominated by Bifidobacterium and Blautia species. The proportion of Bifidobacterium species was lower in the first trial than in the second. Surprisingly, these Bifidobacteria seemed to dominate the cellulose fermentation in the first trial. After 24 h, when the ADC fermentation had already taken place, the DNA proportion from Bifidobacterium species had increased from $10.7\%$ to $37.0\%$, while other genera seemed to be suppressed. The fermentation of ADC might be driven by Bifidobacteria, but this enrichment can also be the result of the presence of glucose, released from the cellulose by other microorganisms. In the second trial, a different evolution of the microbial community was observed. While the microbial community was also enriched in Bifidobacteria at 24 h (when the ADC was not yet fermented), Ruminococcus species took the upper hand between 24 h and 48 h, the period in which the ADC fermentation took place. The microbiota in the third trial did not switch to a cellulolytic metabolism within the given time frame of 48 h. The microbial composition of the starting pool of this experiment was clearly less dominated by Bifidobacterium and Blautia species than those of trials 1 and 2. Furthermore, during the experiment, Bacteroides dominated the medium instead of Bifidobacterium or Ruminococcus. Consequently, we can hypothesize that the switch to cellulose fermentation occurs only if specific specialized microorganisms are present and the composition of the pool allows them to take the upper hand.
Based on trials 1 and 2, the authors hypothesize that this switch depends on the presence and abundance of specific Ruminococcus or Bifidobacterium species, but further research is needed to confirm this statement. Despite the comparable starting microbial communities of those two trials, the fermentation of the two ADC samples caused an enrichment of different microorganisms, demonstrating the complexity of this fermentation process. ## 4. Conclusions The combination of ball milling with acid hydrolysis was demonstrated to be a valuable strategy for increasing the enzymatic accessibility of microcrystalline cellulose, since it can decrease the avDP and the crystallinity of cellulose selectively or simultaneously. These modifications effectively resulted in an enhanced digestibility by a commercially available cellulase blend. Within an avDP range of 20–110 AGU, the avDP impacted the hydrolysability by this enzyme blend once a low crystallinity was ensured. Furthermore, the enhanced accessibility of such amorphized depolymerized cellulose resulted in a higher fermentation degree compared to unmodified cellulose upon incubation with a pooled fecal inoculum from human subjects. With this modification, the minimal degree of fermentability of cellulose (based on the mass of SCFA produced from cellulose) within 48 h could be enhanced from $5\%$ to $45\%$. This was observed in two independent fermentation trials; other trials, however, did not show this enhanced cellulose fermentation. Microbial analyses of the fecal inocula revealed the complexity of cellulose fermentation in batch systems. Performing a detailed analysis of the cellulose fermentation metabolism in the human colon is, therefore, key to fully revealing the effect of the DP and crystallinity of cellulose on fermentation in batch conditions.
Until this is investigated, the authors would like to stress that the interpretation of in vitro fermentation results must always be performed with caution, and full characterization of the microbial pool is always encouraged. We can nevertheless conclude that engineering the properties of cellulose towards high accessibility can improve its fermentation in the colon as well, albeit under specific circumstances. With this work, a first step is taken towards a highly functional cellulose-type dietary fiber additive. ## 5. Patents The use of amorphized depolymerized cellulose as partially fermentable dietary fiber is patented in EP 2022/0784403 (not published).
# Simplifying carb counting: A randomized controlled study – Feasibility and efficacy of an individualized, simple, patient‐centred carb counting tool ## Abstract The SCC tool is a simple tool that enables accurate insulin dosing for all patients with diabetes treated with basal and bolus insulin. The SCC tool has the potential to apply to all patients with diabetes, particularly those who are uncomfortable with the use of advanced technology, or who do not have access to such technology due to age, education or language barriers. ### Introduction The purpose of this study was to introduce and test a simple, individualized carbohydrate counting tool designed for persons with Type 1 Diabetes Mellitus (T1DM), in order to determine whether the tool improved A1C levels for participants with age, education or language barriers. ### Methods In a randomized controlled trial, 85 participants were offered six diabetes instructional sessions free of charge over a six‐month period. Forty‐one received guidance using the regular carbohydrate counting (RCC) method. Forty‐four received guidance using an individualized ‘Simple Carb Counting’ (SCC) tool, involving two customized tables prepared for participants. ### Results The simple, individualized SCC tool for carbohydrate counting was non‐inferior to the standard RCC method. The SCC tool was more effective among participants aged 40 and older, while no differences were found when comparing participants by education level. Irrespective of intervention group, all participants improved their A1C level ($9.9\%$ = 13.2 mmol/L vs $8.6\%$ = 11.1 mmol/L, $$p \leq .001$$). A greater improvement in A1C level was seen in newly diagnosed participants (−6.1 vs −0.7, $$p \leq .005$$; −3.4 vs 0.9, $$p \leq .032$$) in the RCC and SCC groups, respectively.
All participants reported improved emotional wellbeing on the PAID5 (Problem Areas in Diabetes Scale) questionnaire (10.6 (±5.7) vs 9.5 (±5.7), $$p \leq .023$$), with women reporting greater improvement than men. ### Conclusions SCC is a simple, individualized, feasible, low‐tech tool for carbohydrate counting, which promotes and enables accurate insulin dosing in people with T1DM. It was found to be more effective among participants aged 40 and older. Additional studies are needed to corroborate these findings. ## INTRODUCTION Achieving glycaemic control in Type 1 diabetes mellitus (T1DM) is vital in order to reduce diabetes‐related complications. 1, 2 Treatment recommendations include multiple daily injections of prandial and basal insulin, or continuous subcutaneous insulin infusion (CSII). As carbohydrates are the major nutrient affecting the post‐prandial response, it is important to educate individuals on matching prandial insulin doses to carbohydrate intake by means of carbohydrate counting (CC). The checking of pre‐meal blood glucose levels and co‐ordination of anticipated physical activity 3 are also recommended for optimizing glucose control. CC takes into account the carbohydrate intake per meal, and enables adjustment of the prandial insulin dosage necessary to achieve individual post‐prandial glucose targets. 4 To manage CC, the patient has many variables to consider: (i) the personal insulin/carb ratio (I:C) – the amount of insulin the patient needs to cover a 15‐gram portion of carbohydrates; (ii) the accurate assessment of the amount of carbohydrates consumed; (iii) the insulin sensitivity (IS), that is, the decrease in blood glucose after injecting one unit of insulin to counter an elevated glucose level; (iv) the individual target glucose level pre‐ and post‐meal; and (v) the level of glucose prior to the meal.
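The dose arithmetic implied by variables (i)–(v) is the sum of a carb-covering dose and a correction dose. The sketch below is illustrative arithmetic only, not the study's tool and not dosing advice; the function name and example numbers are ours.

```python
def bolus_units(carbs_g, units_per_15g, bg_mgdl, target_mgdl, is_mgdl_per_unit):
    """Prandial bolus = carb cover + correction.

    units_per_15g: personal I:C ratio, insulin units per 15-g carb portion (i).
    is_mgdl_per_unit: insulin sensitivity, mg/dL glucose drop per unit (iii).
    No negative correction is applied when glucose is below target.
    """
    carb_dose = (carbs_g / 15.0) * units_per_15g                        # (i)+(ii)
    correction = max(0.0, (bg_mgdl - target_mgdl) / is_mgdl_per_unit)   # (iii)-(v)
    return round(carb_dose + correction, 1)

# 60 g of carbs, 1 U per 15-g portion, pre-meal glucose 180 vs target 120, IS 40
dose = bolus_units(60, 1.0, 180, 120, 40)  # 4.0 + 1.5 = 5.5 units
```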
This means that, in order to normalize post‐prandial glucose levels, individuals need to be highly motivated to maintain wellbeing through ongoing compliant conduct, and to possess high mathematical and literacy skills. Studies 5 have shown the difficulties of successfully carrying out CC, especially concerning the over‐ and underestimation of carbohydrate content in various foods, which can lead to post‐prandial hypo‐ or hyperglycaemia, respectively. In a systematic review 6 of CC efficacy in managing T1DM, Bell et al 6 found only partial benefit in achieving glycaemic control through CC, notwithstanding the difficulty of complying with CC instructions. These difficulties add to the other barriers that patients with T1DM face in following dietary recommendations, such as frequent glucose measurements and, importantly, their feelings about having diabetes. In recent years, new technologies have been developed to simplify CC. 7 While various commercial applications and bolus calculators may lead to better glucose control, such options are not available to all populations, including those of lower socioeconomic status, older people and those with lower technological skills. In an attempt to overcome disparities in access and other barriers, the Diabetes Clinic of Soroka University Medical Center (SUMC) developed a simple, easy‐to‐use tool – Simple Carb Counting (‘SCC’). This tool includes adjustments for those of differing educational backgrounds, cultures and cognitive abilities. It affords persons with diabetes continued enjoyment of their personal eating habits and food preferences through the successful adoption of the SCC method into their daily routines with ease and accuracy. The aim of this randomized controlled study was to test the feasibility and efficacy of SCC, compared to the regular CC method. Our hypothesis was that this simple tool would enable all people with T1DM to improve their diabetes control.
## METHODS We performed an open‐label randomized controlled trial at the Diabetes Clinic of SUMC, a tertiary 1200‐bed hospital that treats a diverse population, including Bedouin and other Arabs, and Jews from the general, ultra‐Orthodox and Ethiopian sectors. Patients with T1DM were eligible for this study if they were (i) over 18 years old, (ii) treated either with an insulin pump or with multiple daily injections of insulin and (iii) had a hemoglobin A1c (A1C) level equal to or greater than 8.5. The study excluded pregnant or lactating women, patients with severe renal failure, heart failure or under active treatment for cancer. All participants signed an informed consent statement. The study was approved by the SUMC Helsinki Committee on 8 October 2015, approval number 0320‐15, and is registered at ClinicalTrials.gov, ID NCT04132128; it was conducted from November 2015 to July 2017. Using simple randomization, we assigned participants to one of two groups: regular carbohydrate counting (RCC) or the simplified individualized carbohydrate counting tool (SCC). The selection process was purely random for every assignment made by Diabetes Clinic staff. All participants were allowed up to six instructional sessions with a registered dietitian, who is also a diabetes care and education specialist, during a period of about 6 months. All sessions were free of charge and lasted at least 60 min. During the first session, participants were introduced to the carbohydrate counting method to which they had been assigned at randomization. Subsequent sessions were dedicated to reinforcing and practicing their method, and changes in insulin dosage parameters were made if needed. All patients treated with insulin pumps were encouraged to use the bolus calculator. The primary end‐point was the A1C level 6 months after the intervention. Additional data parameters were collected, including demographics, education and duration of diabetes.
Weight was measured and blood studies were conducted before and after the intervention to determine baseline A1C and lipid levels. In order to identify depression and diabetes‐related distress, the participants were asked to complete the PAID5 (Problem Areas in Diabetes Scale) questionnaire 8 at baseline and post‐intervention. ## Regular carbohydrate counting (RCC) During the instructional sessions, the rationale for carbohydrate counting was explained. Commercial booklets containing a list of the carbohydrate content of foods were provided, and participants were introduced to websites and cellular phone applications designed to assist the public with determining the carbohydrate content of various foods. The participants were taught to calculate the amount of insulin needed using their personal I:C ratio, IS, correction factor and the glucose target goals prescribed by the Diabetes Clinic team. Participants were encouraged to keep a food diary to assist them with carbohydrate counting. ## Simplified individualized carbohydrate counting (SCC) The SCC tool consisted of two tables written in the participant's native language and adjusted to the participant's specific requirements. Insulin doses were calculated by professional staff using personalized I:C ratios and IS. First table: The first table, derived from the patient's personal IS, listed the number of units that participants needed to administer in order to correct every pre‐meal blood glucose level so as to reach their target glucose. Second table: The second table contained a list of food items derived from participants' personal eating habits, as recorded in their food diaries. The list consisted mainly of the most common foods they regularly consumed, the carbohydrate content of those foods and the number of insulin units needed, as calculated by the personal I:C ratio, per usual portion of each food item.
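The two SCC tables can be generated mechanically from the personal parameters. A minimal sketch follows; the glucose bands, food items and all numbers are hypothetical, whereas the study's actual tables were individualized and written in each participant's native language.

```python
def correction_table(target_mgdl, is_mgdl_per_unit, bands=range(150, 301, 30)):
    """First table: correction units for each pre-meal glucose band,
    derived from the personal insulin sensitivity (IS)."""
    return {bg: round(max(0.0, (bg - target_mgdl) / is_mgdl_per_unit), 1)
            for bg in bands}

def food_table(usual_foods, units_per_15g):
    """Second table: usual foods -> insulin units per usual portion,
    from the personal I:C ratio. usual_foods maps item -> carbs (g)/portion."""
    return {item: round((carbs / 15.0) * units_per_15g, 1)
            for item, carbs in usual_foods.items()}

corrections = correction_table(120, 40)                          # e.g. 180 -> 1.5 U
doses = food_table({"pita bread": 30, "rice, 1 cup": 45}, 1.0)   # 2.0 U, 3.0 U
```

For pump users, the second table would instead carry the carbohydrate grams themselves, to be entered into the bolus calculator, as described in the text.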
High‐carbohydrate foods that participants included in their diet were listed, not for healthy nutrition education, but for purposes of facilitating carb counting. Foods that did not contain carbohydrates, such as protein or fat items, were also listed, to ensure that the patient realized that these foods contained no carbs. Patients treated with insulin pumps received a personalized table listing the carbohydrates in their food list in grams, to accurately calculate the amount of carbs entered into the calculator (Figures 1 and 2). **FIGURE 1:** *SCC carb counting chart for MDI users.* **FIGURE 2:** *SCC carb counting chart for insulin pump users.* At each instructional session, participants' tables were reviewed, personal dosing was tested, food items were added to or deleted from participants' lists as warranted, and the logic of the method was reiterated for the purpose of reinforcing participants' understanding and compliance. ## Statistical analysis On the basis of an expected A1C difference of $0.8\%$ between groups, with $90\%$ power, an SD of $0.4\%$ and a significance level of 0.05, we estimated that the sample size should consist of at least 37 participants in each group. We used descriptive statistics to summarize the data, reporting results as means and standard deviations. Categorical variables were summarized as counts and percentages. A paired t‐test was used to examine within‐group changes in A1C between baseline and follow‐up, and Student's t‐test was used to examine differences between the two intervention groups. We further analysed the data after stratifying the study population by sex, education level (above and below 12 years of school), age (above and below age 40) and duration of diabetes (more or less than 5 years since diagnosis). Exact p values are shown. All analyses were performed with IBM SPSS. ## RESULTS In total, 107 men and women were recruited for the study, of whom 22 were excluded, as shown in Figure 3.
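The two comparisons in the statistical analysis above can be sketched with SciPy on simulated data; the numbers below are generated purely for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import ttest_rel, ttest_ind

rng = np.random.default_rng(0)

# Simulated A1C (%) for one group: baseline and 6-month follow-up
baseline = rng.normal(9.9, 1.5, 40)
followup = baseline - rng.normal(1.3, 0.8, 40)

# Within-group change, baseline vs follow-up (paired t-test)
t_within, p_within = ttest_rel(baseline, followup)

# Between-group comparison of A1C changes, RCC vs SCC (Student's t-test)
change_rcc = rng.normal(-1.3, 1.9, 41)
change_scc = rng.normal(-1.4, 2.0, 44)
t_between, p_between = ttest_ind(change_rcc, change_scc)
```

The paired test uses each participant as their own control, which is why it detects the within-group A1C drop with far fewer subjects than an unpaired comparison would need.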
Of the 85 people who were deemed eligible, 48.2% (n = 41; 23 women, 18 men) were assigned to the RCC group and 51.8% (n = 44; 17 women, 27 men) to the SCC group. The mean age of participants was 43.1 years (range 18–74). About 43% of the RCC group were treated with an insulin pump, versus 36% of the SCC group (p = .48). All patients monitored their glucose by self‐monitoring of blood glucose (SMBG) (Figure 1 and Table 1).

**FIGURE 3:** *Flow diagram – Trial of standard carb counting versus simple individualized carb counting.*

TABLE_PLACEHOLDER:TABLE 1

The participants were followed for a mean period of 6 months, during which all participants in the study improved their A1C level between baseline and follow‐up (9.9% = 13.2 mmol/L vs 8.6% = 11.1 mmol/L, p < .001). Other biomarkers did not change from baseline in either of the groups. We stratified the study population by participant age (older and younger than age 40). Among older participants (mean age 55.2 (±9.4) years), only those who used the SCC method exhibited significant improvement in A1C level from baseline (9.6 (±1.3)% = 12.7 mmol/L to 8.6 (±1.1)% = 11.1 mmol/L, p = .002), while those using the RCC method showed some improvement that did not reach statistical significance (9.7 (±1.3)% = 12.9 mmol/L to 9.2 (±1.5)% = 12.1 mmol/L, p = .09) (Figure 4).

**FIGURE 4:** *HbA1c results SCC vs RCC.*

Among younger participants (mean age 29.6 (±6.3) years), a significant improvement was found in both the RCC group (10.2 (±2.3)% = 13.7 mmol/L to 8.3 (±1.4)% = 10.6 mmol/L, p < .001) and the SCC group (10.3 (±1.9)% = 13.8 mmol/L to 8.3 (±1.1)% = 10.6 mmol/L, p = .002). We stratified the study population by level of education (above and below 12 school years). Both higher‐ and lower‐educated participants in the SCC group demonstrated a significant improvement in A1C level.
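The mmol/L figure quoted alongside each A1C percentage appears to follow the standard ADAG estimated-average-glucose (eAG) conversion; a minimal sketch, with the formula as an assumption on our part rather than one stated in the paper:

```python
def eag_mmol_l(a1c_percent):
    """ADAG estimated average glucose: eAG (mmol/L) ~ 1.59 * A1C(%) - 2.59."""
    return round(1.59 * a1c_percent - 2.59, 1)

# 9.9% -> 13.2 mmol/L and 8.6% -> 11.1 mmol/L, matching the pairs in the text
```

The other quoted pairs (e.g. 9.6% = 12.7 mmol/L, 8.3% = 10.6 mmol/L) are also consistent with this conversion.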
The results for those with above and below 12 school years were −1.4% (±2.0) vs −1.3% (±1.9), respectively (p = .7). We found a greater improvement in A1C levels when we compared participants with more recently diagnosed diabetes (<5 years from diagnosis, n = 13) to those whose diabetes was of longer duration (n = 72) (−6.1 vs −0.7 and −3.4 vs 0.9, p = .032) in both the RCC and SCC groups. We measured the degree of patient compliance by the number of instructional sessions attended during the study period. Compliance was fairly good for all participants, with a mean of 4.8 visits (out of the six allowed). No differences were found in compliance between participants in the RCC group and those in the SCC group, but women were more compliant than men, with a mean of 5.3 (±1.3) visits, compared to 4.4 (±1.9) for the men (p = .01). Compliance tended to be higher among participants with more than 12 years of education, compared to those with fewer years of education (4.5 (±1.9) vs 5.3 (±1.3) visits, p = .08). The degree of compliance correlated with decreased A1C level post‐intervention. Sixty‐three participants (30 in RCC and 33 in SCC) completed the PAID5 questionnaire at baseline and post‐intervention. All participants reported increased satisfaction, as exhibited by a decreased PAID5 score (10.6 (±5.7) vs 9.5 (±5.7), p = .023). Based on the questionnaire responses, there were no differences in diabetes‐related emotional distress between participants in the RCC group and those in the SCC group. Yet, when the data were stratified by sex, a significant improvement was found among women, who reported a decrease in diabetes‐related emotional distress from 11.5 to 9.9 (p = .04), compared to men (9.9 to 9.18, p = .27) (Table 2).
**TABLE 2**

| | Baseline PAID Score (±SD) | Post‐intervention PAID Score (±SD) | p‐value | p‐value (SCC vs RCC) |
| --- | --- | --- | --- | --- |
| All participants (N = 63) | 10.6 (5.7) | 9.5 (5.7) | 0.02 | |
| All SCC (N = 33) | 12.5 (5.4) | 11.1 (5.5) | 0.03 | |
| All RCC (N = 30) | 9.3 (5.4) | 8.42 (5.6) | 0.09 | 0.69 |
| All women (N = 30) | 11.5 (5.8) | 9.9 (5.6) | 0.02 | |
| SCC women (N = 18) | 13.11 (5.5) | 11.28 (5.6) | 0.04 | |
| RCC women (N = 12) | 9 (5.6) | 7.75 (5.1) | 0.15 | 0.7 |
| All men (N = 33) | 9.9 (5.7) | 9.18 (5.9) | 0.13 | |
| SCC men (N = 12) | 10.67 (6.2) | 9.83 (6.1) | 0.23 | |
| RCC men (N = 21) | 9.48 (5.5) | 8.81 (5.9) | 0.21 | 0.9 |
| Education >12 y (N = 31) | 8.65 (4.4) | 7.87 (4.7) | 0.12 | |
| SCC education >12 y (N = 15) | 9.5 (4.9) | 8.3 (5.2) | 0.11 | |
| RCC education >12 y (N = 16) | 7.9 (3.9) | 7.4 (4.4) | 0.32 | 0.6 |
| Education <12 y (N = 32) | 12.6 (6.3) | 11.1 (6.2) | 0.02 | |
| SCC education <12 y (N = 15) | 14.8 (5.5) | 13.1 (5.5) | 0.08 | |
| RCC education <12 y (N = 17) | 10.6 (6.4) | 9.3 (6.5) | 0.08 | 0.77 |
| Age > 40 (N = 33) | 9.1 (6.0) | 8.1 (5.3) | 0.07 | |
| SCC age > 40 (N = 13) | 11.3 (6.6) | 9.6 (5.4) | 0.08 | |
| RCC age > 40 (N = 20) | 7.75 (5.2) | 7.2 (5.2) | 0.2 | 0.4 |
| Age < 40 (N = 30) | 12.3 (5.1) | 11 (5.9) | 0.04 | |
| SCC age < 40 (N = 17) | 12.8 (5.2) | 12 (6.1) | 0.11 | |
| RCC age < 40 (N = 13) | 11.7 (5.6) | 10.3 (5.8) | 0.11 | 0.9 |

During the 6‐month follow‐up, two patients from the RCC group were hospitalized with diabetic ketoacidosis (DKA). There were no hospitalizations or ER visits due to hypoglycaemia in either group.

## DISCUSSION

The findings showed that SCC, a simple, individualized tool for carbohydrate counting, was non‐inferior to the standard RCC method. The SCC tool was more effective among participants aged 40 and older, while no differences were found when comparing participants with more or fewer than 12 school years.
However, significant improvement in A1C level was observed in all participants. Participants in both the RCC and SCC groups who had been diagnosed with diabetes within the previous 5 years exhibited significantly greater improvement in A1C level, compared to participants with diabetes of longer duration. The SCC method presented in our study was developed to overcome difficulties and barriers that the diabetes clinic patients encountered in implementing CC, as described in several studies. Kawamura et al 5 examined errors in carbohydrate content estimation among 37 paediatric patients, their parents, and their health care professionals, including physicians and dietitians. In all groups studied, they found overestimation of the carb content of foods with small amounts of carbs, and underestimation for foods with high carb content. While past experience in CC helped, some foods, such as rice, were hard to estimate even for experienced participants. In a qualitative study, Gürkan et al 9 interviewed adolescents with diabetes and found multiple barriers to effective treatment, among them patients' negative feelings about having diabetes, as well as personal and environmental barriers. Personal barriers included lack of knowledge about the disease, trouble with glucose measurement, and difficulty following dietary recommendations. These findings were corroborated by Ahola et al 10, who found that many patients experienced difficulty managing their post‐prandial glucose and spent a high percentage of time in a hyperglycaemic state. In this study, we present an option to overcome some of the barriers described in the above studies. Through personalization, flexibility, and a departure from a restrictive diet paradigm, SCC affords persons with diabetes an opportunity to continue eating their usual diet, including the customary dishes of their culture, and to go on with their life‐long social dining habits.
Among participants older than age 40, the study showed that those who used the SCC method achieved better glycaemic control than those in the RCC group. Treating older patients with T1DM is complicated by the combined challenges of insulin‐dependent diabetes, age‐related complications, and possible comorbidities, all of which negatively affect the older population's ability to self‐manage diabetes. 11 The SCC tool presented in the study simplifies the tasks needed for carbohydrate control, and consequently leads to better glycaemic control, especially for the older age group. In contrast, the study showed that SCC was non‐inferior in people with various levels of education. In a cross‐sectional multicentre study of 768 subjects under age 18 with T1DM, Gesuita et al 12 found that only 28.1% of participants reached target A1C values (<7.5%). A strong correlation was found between higher socio‐economic status (SES), higher level of education, and a higher ability to follow ordinary CC. Significantly, Gesuita et al. highlighted the need for an accessible tool for non‐privileged populations. Recently diagnosed participants in both the RCC and SCC groups showed the greatest improvement in glycaemic control. This may be explained in two ways, one psychological and one physiological. When first diagnosed, many patients are highly motivated to do well. In addition, in the early period after diagnosis, sometimes called the 'honeymoon period', the pancreas still secretes more insulin, although this phenomenon decreases with time and differs between patients. All participants exhibited significant decreases in their PAID5 scores. Studies have shown 13, 14 that people with diabetes suffer from higher levels of psychological distress than does the healthy population. People with T1DM are three times more likely to develop depression than those without diabetes.
15 Moreover, psychological distress has been shown to be associated with hyperglycaemia, complications and a higher mortality rate. 16, 17 Thus, there is a consensus that treating psychological stress and achieving psychological wellbeing ought to be among the treatment goals of diabetes care. 18 A study by Zagarins et al 19 revealed a correlation between improvement of glycaemic control and alleviation of overall psychological stress, but not of depression. Our study corroborates these findings, and underscores the need for ongoing diabetes education, a better understanding and treatment of diabetes, and the promotion of a greater sense of self‐efficacy among patients in controlling the disease, as a means of improving not only metabolic control but also mental health.

## Limitations

The intervention tool was introduced at a single diabetes clinic in a tertiary teaching hospital, with one registered dietitian/diabetes care and education specialist. The method was not tested on paediatric patients, a population that has more difficulty with glycaemic control than others. A larger population from lower socio‐economic and more culturally diverse backgrounds should be studied in order to corroborate the results and establish generalizability across populations.

## CONCLUSIONS AND IMPLICATIONS

In large measure, research into T1DM treatment is focused on advanced technologies, including insulin pumps, continuous glucose monitoring, the artificial pancreas, and various applications to support CC and diabetes management. Studies have shown 20 that although advanced applications are accessible and improve glycaemic control, only a small percentage of the population with T1DM chooses to use them. This may be explained by the human factor, that is, personal expectations, perceptions of the burden of new technologies, user‐friendliness and long‐term cost.
The SCC tool tested in this study has the potential to benefit all patients with diabetes, and in particular those who are uncomfortable with the use of advanced technology, or who do not have access to such technology. In conclusion, this study presents a simple, feasible, low‐tech tool that simplifies carbohydrate counting and promotes and enables accurate insulin dosing in people with T1DM. Additional studies are needed to corroborate these findings.

## AUTHOR CONTRIBUTIONS

Shulamit Witkow: Conceptualization (lead); data curation (lead); investigation (equal); methodology (equal); resources (lead). Idit F Liberty: Data curation (equal); formal analysis (supporting); investigation (equal); methodology (equal); project administration (equal); supervision (lead); validation (equal); writing – original draft (lead); writing – review and editing (lead). Irina Goloub: Data curation (supporting); resources (supporting). Malka Kaminsky: Conceptualization (supporting); data curation (equal); investigation (supporting); resources (supporting). Olga Otto: Data curation (equal); investigation (supporting). Yones Abu Rabia: Data curation (supporting); investigation (supporting); resources (equal). Ilana Harman Boehm: Conceptualization (equal); data curation (equal); investigation (equal); methodology (equal); resources (equal). Rachel Golan: Formal analysis (lead); writing – review and editing (supporting).

## CONFLICT OF INTEREST STATEMENT

The authors declare that there are no conflicts of interest.

## DATA AVAILABILITY STATEMENT

Raw data were generated at the diabetes centre of SUMC. Derived data supporting the findings of this study are available from the corresponding author [IFL] on request.
# A clinical trial about effects of prebiotic and probiotic supplementation on weight loss, psychological profile and metabolic parameters in obese subjects

## Abstract

Supplementation with prebiotics and probiotics improved lean mass, glycaemic profile, insulin resistance and uric acid more than diet alone.

### Introduction

The management of obesity is difficult, with many failures of lifestyle measures, hence the need to broaden the range of treatments prescribed. The aim of our work was to study the influence of pre‐ and probiotics on weight loss, psychological profile and metabolic parameters in obese patients.

### Methods

This was a clinical trial involving 45 obese patients, recruited from the Obesity Unit of the National Institute of Nutrition between March and August 2022, divided into three groups: diet only (low‐carbohydrate and reduced‐energy diet), prebiotics (30 g of carob/day) and probiotics (one tablet containing Bifidobacterium longum, Lactobacillus helveticus, Lactococcus lactis and Streptococcus thermophilus per day). The three groups were matched for age, sex and BMI. Patients were seen 1 month after the start of the intervention. Anthropometric measures, biological parameters, a dietary survey and psychological scores were performed.

### Results

The average age of our population was 48.73 ± 7.7 years, with a female predominance. All three groups showed a significant decrease in weight, BMI and waist circumference (p < .05). Only the prebiotic and probiotic groups showed a significant decrease in fat mass (p = .001) and a significant increase in muscle strength (p = .008 and p = .004, respectively), but the differences were not significant between the three groups.
Our results also showed a significant decrease in insulinemia and HOMA‐IR in the prebiotic group compared to the diet‐alone group (p = .03; p = .012), and the probiotic group showed a significant decrease in fasting blood glucose compared to the diet‐alone group (p = .02). A significant improvement in sleep quality was noted in the prebiotic group (p = .02), with a significant decrease in depression, anxiety and stress in all three groups.

### Conclusions

The prescription of prebiotics and probiotics alongside lifestyle measures seems worthwhile for the management of obesity, especially if it is sarcopenic, in addition to improving metabolic parameters and obesity‐related psychiatric disorders.

## INTRODUCTION

Today, obesity is a global epidemic according to the World Health Organization, given the increase in its frequency worldwide and its responsibility for the appearance of several chronic pathologies, such as type 2 diabetes, hypertension, cardiovascular diseases, respiratory diseases, osteoarticular diseases, cancer and other pathologies. In 2021, the WHO announced that more than 40% of men and women, or 2.2 billion people, are overweight, and that an unbalanced diet was responsible for at least 8 million deaths per year. It is estimated that by 2025, 167 million people will be at risk of impaired health due to obesity. 1 In Tunisia, the prevalence of obesity was 26.2% in 2016 according to the results of the "Tunisian Health Examination Survey‐2016". 2 Obesity is multifactorial; contributing factors include a high‐fat diet, a sedentary lifestyle, and an imbalance of the intestinal flora, "the gut microbiota", 3 which today is the focus of many publications. The gut microbiota is defined as all the beneficial microorganisms that live and grow in the intestine.
It is established from birth and evolves according to different factors, such as antibiotic treatments or diet (presence of fibres, richness of foods in pre‐ and probiotics). Probiotics are living microorganisms: bacteria such as Lactobacilli, Bifidobacteria, Streptococci and many others, or yeasts. They can be naturally present in our diet, especially in fermented foods such as certain yoghurts or fermented milks. Prebiotics, in contrast, are substrates for these bacteria that allow them to grow and thus exert their beneficial roles; they too are provided by our diet, from the dietary fibres present in vegetables and fruits, such as carob, chicory and others. Today, the microbiota is considered a therapeutic revolution, with researchers using its enrichment to prevent or treat certain diseases including obesity, 4 whether by faecal transplantation 5 or by enrichment of the microbiota with prebiotics and probiotics. 6, 7 Hence our interest in transposing these theoretical results to clinical practice. Aim: the objective of this interventional clinical trial was to evaluate the effects of a probiotic supplement containing Bifidobacterium and Lactobacillus strains and a prebiotic supplement of carob on changes in body composition and metabolic biomarkers in subjects with obesity (main purpose). We also assessed the psychological profile of the population (quality of sleep, stress, anxiety and depression) as a secondary purpose.

## MATERIALS AND METHODS

We conducted a prospective interventional study at the obesity unit of the Zouhair El Kallel National Institute of Nutrition and Food Technology of Tunis, from March 2022 to August 2022. We included obese patients (BMI ≥30 kg/m2) aged over 18 years. Patients with renal failure, hypothyroidism, cancer, diabetes treated with insulin, or long‐term corticosteroid therapy, as well as former patients of the obesity unit, were not included.
No participants dropped out of the study during the intervention period. Forty‐five patients were recruited on their first visit to the obesity unit (T0) and were randomly assigned to three groups matched for age, sex and BMI. All participants were enrolled in the weight loss program at the beginning of the study and followed a low‐carbohydrate, reduced‐energy eating plan provided by the same dietician. The first group, "diet only", followed the low‐calorie diet alone without any other intervention (15 patients). The second group, the "prebiotic group", comprised 15 patients on the same diet plan who additionally received prebiotic supplementation (2 carob beans/day, about 30 g). The third group, the "probiotic group", followed the same diet with probiotic supplementation (n = 15): one tablet per day containing an association of four microbiological strains, Bifidobacterium longum, Lactobacillus helveticus, Lactococcus lactis and Streptococcus thermophilus (10 × 10^9 CFU/capsule). The probiotic supplement was produced by Pileje Labs. Patients were reassessed after 1 month (T1), and adherence was tracked by regular phone calls. All subjects gave their informed consent to participate in the study. The study was approved by the ethics committee of the National Institute of Nutrition of Tunis, and the clinical trial was registered under number PACTR202210705998795 in the Pan African Clinical Trial Registry. Body mass index (BMI) was calculated using body weight and height measured with bare feet and in minimal clothing, according to the World Health Organization definition and classification. 8 Body composition parameters (body fat mass and percentage, and body lean mass) were acquired before and after 1 month of intervention with a TANITA BC418MA impedance meter. Waist circumference was measured. Muscle strength was measured by handgrip. Sarcopenia was defined as muscle strength lower than 27 kg for men and 16 kg for women.
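The anthropometric definitions used above (the WHO BMI cut-off for obesity and the handgrip cut-offs for sarcopenia) can be expressed as a small sketch; the helper names are ours, the thresholds are those stated in the text.

```python
def bmi(weight_kg, height_m):
    """Body mass index = weight (kg) / height (m) squared."""
    return weight_kg / height_m ** 2

def is_obese(weight_kg, height_m):
    # WHO definition used for inclusion: BMI >= 30 kg/m^2
    return bmi(weight_kg, height_m) >= 30

def is_sarcopenic(grip_kg, sex):
    # handgrip cut-offs from the text: < 27 kg for men, < 16 kg for women
    return grip_kg < (27 if sex == "M" else 16)

# e.g. 90 kg at 1.70 m -> BMI ~ 31.1 kg/m^2, which meets the inclusion criterion
```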
A biological assessment was carried out at T0 and T1, including fasting glycaemia, HbA1c, cholesterol, triglycerides, HDL, LDL calculated with the Friedewald formula, 9 insulinemia, calculated HOMA‐IR (HOMA‐IR = (insulin (µU/mL) × glycaemia (mmol/L))/22.5), AST, ALT, GGT, creatinine and calculated eGFR. Blood glucose results were interpreted according to American Diabetes Association guidelines. 10 Physical examination assessed blood pressure and other complications of obesity such as hernia, sleep apnoea syndrome, osteoarthritis and NASH, completed with radiological examinations when necessary. All patients underwent an interview including a food survey, a stress questionnaire (Cungi), a sleep questionnaire (Epworth) and a questionnaire on symptoms of depression and anxiety (HADS). For the evaluation of stress, we used the brief stress evaluation scale of Cungi (1997). 11 This scale is made up of 11 items, each scored from 1 to 6. The quality of sleep was evaluated using the Epworth Sleepiness Scale, 12 a questionnaire that assesses the patient's level of daytime sleepiness. It is composed of eight items, and for each situation the patient selects an answer from 0 to 3. The interpretation is as follows: a total of less than 10 suggests no excessive daytime sleepiness; a total of 10 or more suggests excessive daytime sleepiness. To assess the depressive state of the patients, we used the Hospital Anxiety and Depression Scale (HADS). 13 This is a structured questionnaire of 14 items, consisting of two subscales of 7 items each, one for anxiety and the other for depression. Each item is rated on a 4‐point scale, from 0 to 3, evaluating the intensity of symptoms over the past week. The scores therefore range from 0 to 21, with higher scores corresponding to more severe symptoms.
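The two derived biomarkers defined above can be computed directly from the stated formulas; a minimal sketch (function names are ours):

```python
def homa_ir(insulin_mu_ml, glucose_mmol_l):
    """HOMA-IR = insulin (uU/mL) * fasting glucose (mmol/L) / 22.5."""
    return insulin_mu_ml * glucose_mmol_l / 22.5

def ldl_friedewald(total_chol, hdl, triglycerides):
    """Friedewald estimate (all values in mmol/L): LDL = TC - HDL - TG / 2.2.
    The estimate is conventionally not used at very high triglyceride levels."""
    return total_chol - hdl - triglycerides / 2.2

# e.g. the diet-only baseline values in Table 2 (insulin 18.4, glucose 5.3)
# give a HOMA-IR of ~4.3, as reported there
```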
The addition of the scores obtained for each item allows the following interpretation: 7 points or fewer: no symptoms of depression; 8 to 10 points: doubtful symptomatology; 11 and over: definite symptomatology.

## Statistical analysis

Three‐group ANOVA and Student's t‐tests for paired series were used to compare body composition and metabolic parameters between groups and between T1 and T0 (SPSS Statistics, v. 25). Results were expressed as mean ± SD, and mean differences were considered significant at p < .05.

## RESULTS

The average age of our population was 48.73 ± 7.7 years, with extremes ranging from 33 to 63 years. Half of the population (51%) was over 50 years old. The majority of participants were female: 93.3% (n = 42) versus 6.7% (n = 3) men. Past medical history, complications and lab test results are presented in Table 1.

**TABLE 1**

| | Diet only (%) | Prebiotic (%) | Probiotic (%) | p |
| --- | --- | --- | --- | --- |
| **Past medical history** | | | | |
| Diabetes | 6.7 | 26.7 | 13.3 | .3 |
| Hypertension | 6.7 | 20 | 33.3 | .2 |
| Dyslipidaemia | 6.7 | 26.7 | 13.3 | .3 |
| Active smokers (%) | 6.7 | 13.3 | 6.7 | .7 |
| Osteoarthritis (%) | 33.3 | 20 | 26.7 | .6 |
| Sleep apnoea syndrome (%) | 26.7 | 66.7 | 33.3 | .07 |
| Hernia (%) | 13.3 | 6.7 | 13.3 | .6 |
| NASH (%) | 24 | 30 | 24 | .42 |
| Diabetes (%) | 13.3 | 53.3 | 20 | .06 |
| Prediabetes (%) | 13.3 | 6.7 | 33.3 | .06 |
| Insulin resistance (%) | 6.7 | 0 | 7.1 | .65 |
| High TG levels (%) | 46.7 | 66.7 | 26.7 | .18 |
| Low HDL levels (%) | 46.7 | 46.7 | 33.3 | .27 |
| High LDL levels (%) | 26.7 | 33.3 | 33.3 | .4 |

Blood pressure values were comparable in the three groups. Our three groups were matched for BMI.
There was no statistically significant difference in anthropometric measurements (weight, height, BMI, fat mass, muscle mass and waist circumference) between the three groups. In addition, the majority of patients in all three groups had normal muscle strength. Sarcopenia at T0 was noted in 20% of the diet‐only group, 6.7% of the prebiotic group and 13.3% of the probiotic group. In each group, 93.3% of patients were sedentary. At recruitment, we administered a food‐frequency questionnaire on foods rich in prebiotics and probiotics, such as coffee, tea, garlic, onion, fermented foods, cacao, yoghurts and fruits. There were no differences between groups. No patient reported alcohol consumption, and none regularly consumed carob. Most patients in all three groups had high levels of anxiety, depression and stress, without statistically significant differences. The results of the intervention after 1 month are shown in Table 2.

**TABLE 2**

| | Diet only | | | Prebiotic | | | Probiotic | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | T0 | T1 | p | T0 | T1 | p | T0 | T1 | p |
| Weight (kg) | 103.7 | 101.2 | .001 | 103.5 | 101.6 | .003 | 106.09 | 104.4 | .02 |
| Fat mass (kg) | 46.7 | 44.8 | .07 | 47.3 | 44.3 | .001 | 47.5 | 45.01 | .001 |
| Lean mass (kg) | 54.06 | 53.5 | .3 | 55.1 | 55.6 | .2 | 55.5 | 56.4 | .08 |
| Waist circumference (cm) | 119 | 117.3 | .01 | 124 | 120 | .03 | 122 | 119 | .001 |
| Muscle strength (kg) | 24.4 | 24.3 | .8 | 27.4 | 28.8 | .008 | 24.8 | 26.5 | .004 |
| Systolic blood pressure (mmHg) | 13 | 12.8 | .6 | 13 | 12.2 | .03 | 13.3 | 12.6 | .01 |
| Fasting glucose (mmol/l) | 5.3 | 5.5 | .27 | 7.5 | 5.1 | .2 | 5.66 | 5.6 | .6 |
| HbA1c (%) | 5.6 | 5.5 | .03 | 6.6 | 6.3 | .3 | 5.8 | 5.6 | .003 |
| Insulin (μUI/l) | 18.4 | 15.2 | .07 | 23.8 | 14.5 | .002 | 17.5 | 13.7 | .005 |
| HOMA‐IR | 4.3 | 3.8 | .2 | 9.1 | 3.8 | .009 | 4.5 | 3.4 | .009 |
| Cholesterol (mmol/l) | 5.2 | 4.8 | .03 | 5.3 | 4.9 | .005 | 5.2 | 4.6 | .08 |
| HDL (mmol/l) | 1.6 | 1.05 | .9 | 1.07 | 1.08 | .8 | 1.2 | 1.22 | .7 |
| LDL (mmol/l) | 3.2 | 2.9 | .05 | 3.3 | 2.9 | .003 | 3.2 | 2.8 | .004 |
| Triglycerides (mmol/l) | 1.7 | 1.6 | .4 | 1.9 | 1.4 | .001 | 1.6 | 1.4 | .03 |
| ALAT (UI/l) | 21.2 | 18.4 | .03 | 21.3 | 20.8 | .7 | 21.1 | 17.8 | .01 |
| Uric acid | 277.2 | 289.4 | .4 | 346.3 | 365.3 | .3 | 295.7 | 284.2 | .1 |
| Epworth | 9.8 | 8.6 | .06 | 8.7 | 7 | .02 | 10.2 | 7.9 | .03 |
| Anxiety | 13.3 | 11.2 | .02 | 11.4 | 9.4 | .01 | 13.3 | 11.6 | .06 |
| Depression | 12.4 | 9.9 | .001 | 11.2 | 8.06 | .01 | 11.5 | 8.9 | .001 |
| Stress | 40.1 | 33.4 | .01 | 36.2 | 31.3 | .001 | 35.6 | 29.3 | .002 |

The anthropometric results after the intervention showed a statistically significant decrease in weight, BMI and waist circumference in all three groups, but muscle strength increased only with pre‐ and probiotics. The population significantly decreased energy and macronutrient (protein, carbohydrate and lipid) intakes, with a significant decrease in sugar and sodium intake. A significant increase in fibre intake was noted in the diet and prebiotic groups but not in the probiotic group. Sleep quality was not improved by diet alone, and probiotics did not improve anxiety. Taking probiotics was associated with the occurrence of diarrhoea in 20% of cases (p < .001). We then compared the diet‐alone group versus the prebiotic group for all the parameters listed in Table 3; the difference was not significant. The same comparisons were made for diet alone versus probiotics, and finally for prebiotics versus probiotics.
**TABLE 3** (values are mean differences, T0–T1)

| | Diet only | Prebiotics | p | Diet only | Probiotics | p | Prebiotics | Probiotics | p |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Fasting glycaemia (mmol/l) | −0.18 | 1.6 | .016 | −0.18 | 0.06 | .02 | 1.6 | 0.06 | .3 |
| HbA1c (%) | 0.12 a | 0.2 | .17 | 0.12 a | 0.18 a | .3 | 0.2 | 0.18 a | .4 |
| Insulin (μUI/l) | 2.9 | 9.3 a | .03 | 2.9 | 3.8 a | .3 | 9.3 a | 3.8 a | .2 |
| HOMA‐IR | 0.5 | 5.3 a | .012 | 0.5 | 1.04 a | .1 | 5.3 | 1.04 a | .2 |
| Uric acid | −12.2 | −19.07 | .75 | −12.2 | 11.6 | .001 | −19.07 | 11.6 | .02 |

Our conclusion is that the three therapeutic approaches were equivalent with respect to the dietary survey and the different scores (stress, sleep, anxiety and depression). Their influence on weight loss was also equivalent, even though the diet‐alone group lost slightly more weight; the exception was lean mass, which increased more with probiotics than with diet alone (p = .05). On the other hand, significant differences between the three approaches were found in the blood test results presented in Table 3. Prebiotics and probiotics were better than diet alone for the reduction of fasting glycaemia and insulin resistance, while only the probiotic group showed a reduction in uric acid.

## DISCUSSION

This study was an interventional clinical trial designed to examine the effects of a combination of the probiotic bacteria B. longum, L. helveticus, L. lactis and S. thermophilus, and of a prebiotic supplement of 30 g/day of carob, on changes in body composition, metabolic biomarkers and psychological profile in obese human subjects enrolled in a weight loss program. The weight loss program was a low‐carbohydrate, energy‐restricted eating plan.
The study confirmed that a low‐carbohydrate, restricted‐energy diet can be used effectively for weight loss in obese individuals. Our work has some strengths: to our knowledge, no previous Tunisian study has examined the association between prebiotics or probiotics and obesity; the only Tunisian study of the microbiota investigated its imbalance in diabetic patients. 14 The use of carob as a prebiotic for weight loss is an innovation that draws on traditional Tunisian dietary habits that have been abandoned. Carob is available at a nominal cost, less than some fruits and vegetables. Our study assessed several parameters beyond anthropometry, such as biology and other assessment tools including the Epworth score, the HADS and the Cungi stress score. It also has some limitations, such as the small number of patients in each group and the fact that microbiological analysis of the gut microbiota was not performed. In addition, the study was conducted over 1 month; a longer intervention might show different results. Many studies have examined the effect of pre‐ or probiotics on weight loss. Sergeev et al. 15 compared the effect of symbiotic supplementation (prebiotic and probiotic) on the body composition of obese patients against a placebo group that received only a low‐calorie diet; they found a significant decrease in weight in both groups. However, the study of Hiel et al., 16 using inulin as a prebiotic compared to placebo, found a significant reduction in weight in the prebiotic group. This difference may be due to differences in the prescribed diet and in the number of patients. In addition, the study by Stenman et al., 17 which compared prebiotic alone, probiotic alone and prebiotic + probiotic to a placebo group, found that only the probiotic‐alone group presented weight loss compared to the other groups. Some other studies did not find a difference between groups.
18, 19 This difference may be due to differences in the diet given and in the type of prebiotic and probiotic used. Similarly, Rodriguez and colleagues showed in their studies that there were responders and non‐responders among obese patients treated with prebiotics, depending on the intestinal flora species initially present in the host during the intervention. 20 Indeed, the microbiota intervenes in the regulation of energy expenditure by acting on specific hormones: through bidirectional signalling between the brain and the intestine, the gut microbiota regulates appetite and energy expenditure, which in turn regulates weight. 21 Prebiotics act on the microbiota by increasing the production of short‐chain fatty acids, which in turn triggers a cascade of modifications leading to weight reduction and improved metabolic parameters. 22 Our study showed a significant increase in muscle strength in both the prebiotic group and the probiotic group, as did Zahao and Kang in their studies. 23, 24 Alteration of the gut microbiota has been shown to directly affect muscle strength. Probiotics, prebiotics and short‐chain fatty acids are potential new therapies to improve lean mass and physical performance. Strains of Lactobacillus and Bifidobacterium (present in Lactibiane*) can restore age‐related muscle loss. The pathways by which the microbiota influences muscle are diverse and complex. 25 Our results showed a beneficial effect of prebiotics and probiotics on carbohydrate metabolism. These results agree with the study conducted by Miller et al., 26 which found that a synbiotic yoghurt protected mice against diabetes by significantly improving fasting blood glucose levels versus an unenriched control yoghurt. In addition, a preparation rich in fibre and lactulose used as a prebiotic in an old clinical study 27 showed a decrease in blood sugar in 10 obese patients.
Oral supplementation with prebiotics and probiotics acts on the regulation of glycaemia; the mechanism of action consists of reducing the secretion of inflammatory markers such as IFN‐γ and IL‐1β while increasing the production of the anti‐inflammatory cytokine IL‐10. In addition, probiotics stimulate the secretion of the neurotransmitter GABA, which decreases the production of glucagon and stimulates the production of insulin. 28, 29 Our study showed a decrease in uric acid in the probiotic group, with a significant difference compared to the diet‐alone group and the prebiotic group. Regarding the effect of probiotics on uric acid, the first evidence came from the pilot study of Garcia‐Arroyo, carried out in 2018 on six rats, which supported this hypothesis. 30 Other studies then followed with the same results. 31, 32 The decrease in energy intake found after prebiotic and probiotic supplementation is explained by the stimulation of leptin secretion and the decrease in ghrelin secretion, which increase satiety and consequently decrease intake. In addition, the reduction of microbiome lipopolysaccharides by pre‐ and probiotics promotes reduced appetite by increasing satiety. 33 A decrease in the Epworth score was found in all three groups. Our study was consistent with others. 34, 35 However, the study by Buigues et al. 36 did not show conclusive results of prebiotics on sleep quality. Fermentation of prebiotic fibres by the microbiota produces butyrate, which improves sleep quality, 37 although the mechanisms involved are more complex than that. 38 The three approaches were comparable in their influence on depression and anxiety. Other studies showed a clear improvement in these symptoms when patients took probiotics. 39, 40 It has been shown that probiotics stimulate the production of inhibitory neurotransmitters such as GABA, which causes a reduction in anxiety and depression.
41 On the other hand, an imbalance of the gut microbiota contributes to the occurrence of depression through a decrease in the production of some lipid metabolites (endogenous cannabinoids). 42 As for stress, prebiotics and probiotics increase the production of serotonin, a molecule involved in mood regulation, by stimulating the synthesis of tryptophan, 43 which improves the symptoms of stress.

## CONCLUSION

The imbalance in the functioning of the body is due, on the one hand, to the imbalance of the gut microbiota caused by obesity, which alters the beneficial microorganisms, and, on the other hand, to this alteration itself, which further promotes obesity through several mechanisms and signalling pathways. 44 The intestinal microbiota, sometimes called the second brain, is involved in regulating the functioning of the organism, as several studies have demonstrated. Hence the importance of modulating the gut microbiota with prebiotics and probiotics to treat obesity and improve the related metabolic parameters. In the light of this study and other studies, the following measures are advisable for treating obesity:

- Follow a diet balanced in energy intake to prevent alteration of the gut microbiota.
- Enrich the diet with foods rich in prebiotics and probiotics, either to prevent the onset of obesity or to treat it.
- Consider treatment with pre‐ and probiotics in cases of sarcopenic obesity.
- Adopt treatment with prebiotics and probiotics, especially if obesity is linked to a glycaemic disorder.
- Prescription of prebiotics and probiotics can improve the quality of sleep, anxiety and stress in some cases.

## AUTHOR CONTRIBUTIONS

Nadia Ben Amor: Visualization (equal). Faten Mahjoub: Visualization (equal). Olfa Berriche: Visualization (equal). Chaima El Ghali: Investigation (equal). Amel Gamoudi: Project administration (equal). Henda Jamoussi: Writing – review and editing (equal).

## FUNDING INFORMATION

This research received no funding.
## CONFLICT OF INTEREST

The authors declare no conflict of interest.

## DATA AVAILABILITY STATEMENT

Data sharing is not applicable to this article as no new data were created or analyzed in this study.
# Examining dyslipidaemia, metabolic syndrome and liver enzyme levels in patients with prediabetes and type 2 diabetes in population from Hoveyzeh cohort study: A case–control study in Iran

## Abstract

Our results indicated a significant increase in liver enzymes, lipid profile and MetS status in both pre‐diabetic and T2DM subjects, with the differences being more pronounced in diabetic individuals.

### Introduction

Type 2 diabetes mellitus (T2DM) is among the world's top 10 leading causes of death. Additionally, prediabetes is a major risk factor for diabetes. Identifying disorders that co‐occur with diabetes can aid in reducing adverse effects and facilitating early detection. In this study, we evaluated dyslipidaemia, metabolic syndrome (MetS) and liver enzyme levels in pre‐diabetic and T2DM patients in the Persian cohort compared to a control group.

### Materials and Methods

In this cross‐sectional study, 2259 pre‐diabetic subjects, 1664 T2DM patients and 5840 controls (35–70 years), selected from the Hoveyzeh cohort centre, were examined. Body mass index, blood pressure, fasting blood glucose (FBG), total cholesterol (TC), high‐density lipoprotein cholesterol (HDL‐C), triglyceride (TG) and the liver enzymes γ‐glutamyltransferase (GGT), alanine aminotransferase (ALT) and aspartate aminotransferase (AST) were determined using standard protocols. MetS subjects were also identified based on the National Cholesterol Education Program guidelines.

### Results

Prediabetes and T2DM were closely correlated with the lipid profile, MetS and liver enzymes (ALT, GGT, ALT/AST). MetS increases the risk of T2DM 12.45‐fold [95% CI: 10.88–14.24], while an increase in the ALT/AST ratio increases the risk of T2DM 3.68‐fold [95% CI: 3.159–4.154]. ROC curve analysis also revealed the diagnostic roles of GGT, ALT, AST and the ALT/AST ratio among pre‐diabetics, diabetics and the control group. The GGT level corresponds to the highest AUC (0.685) with the highest sensitivity (70.25%).
### Conclusions

Our results indicated a significant increase in liver enzymes, lipid profile and MetS status in both pre‐diabetic and T2DM subjects, with the differences being more pronounced in diabetic individuals. Consequently, on the one hand, these variables may be considered predictive risk factors for diabetes, and on the other hand, they may be used as diagnostic factors. Additional research is required to confirm the clinical applications of these variables.

## INTRODUCTION

Diabetes mellitus, as a metabolic disorder, 1 is one of the most prevalent global public health issues 2 and contributes to a rise in morbidity and mortality. 3 According to estimates from the International Diabetes Federation (IDF), 1 in 11 individuals between the ages of 20 and 79 had type 2 diabetes mellitus (T2DM) in 2015, 4 a figure that could reach 629 million by 2045. 2 Diabetes is hyperglycaemia resulting from insulin deficiency, insulin resistance or both. 1, 3 Prediabetes is a major diabetes risk factor. 2 It is a hyperglycaemic condition marked by impaired fasting glucose (IFG), impaired glucose tolerance (IGT), glycated haemoglobin (A1C) of 6.0%–6.4%, or a combination of these. 1, 2 Both dyslipidaemia and hypertension are significant risk factors for T2DM. According to the American Diabetes Association, patients with T2DM who have dysregulated levels of lipids such as total cholesterol, triglycerides, very‐low‐density lipoprotein (VLDL), low‐density lipoprotein (LDL) and high‐density lipoprotein (HDL) are diagnosed with diabetic dyslipidaemia. Alternatively, lipid markers may be useful predictors of risk in diabetic patients. 5 In addition, prediabetes and T2DM are common manifestations of metabolic syndrome (MetS). 1 Some studies indicate that individuals with metabolic syndrome are four times more likely to develop T2DM.
6 MetS is characterized by hypertriglyceridaemia, low HDL cholesterol, abdominal obesity or a high BMI ratio, glucose intolerance or insulin resistance, hypertension and microalbuminuria. 7 Insulin resistance syndrome may result in hepatic dysfunction, which can lead to T2DM. 6 Accordingly, patients with advanced liver disease have a higher incidence of diabetes than the general population. 8 Conversely, the release of free fatty acids (FFAs) due to T2DM decreases hepatic mitochondrial function. In turn, this causes further triglyceride storage in the hepatocyte and, ultimately, liver damage. 8 Serum levels of liver enzymes, such as alanine aminotransferase (ALT), aspartate aminotransferase (AST) and, to a lesser extent, γ‐glutamyltransferase (GGT), are frequently used as indicators of liver damage. 9 In the past decade, several studies have linked serum concentrations of these enzymes to multiple metabolic syndrome symptoms, including hepatic insulin resistance, T2DM and dyslipidaemia. 9, 10, 11 Even so, little research has been conducted on the relationship between dyslipidaemia, metabolic syndrome and liver enzyme levels in pre‐diabetic and T2DM patients. To determine the relationship between these risk factors and the development of prediabetes and diabetes in the adult population of the Hoveyzeh cohort centre, this study was conducted on three groups: healthy, pre‐diabetic and T2DM.

## MATERIALS AND METHODS

We conducted a cross‐sectional study in men and women aged 35–70 who underwent a comprehensive health screening exam at the Hoveyzeh cohort centre for Prospective Epidemiological Research Studies in Iran (PERSIAN), a region in Iran's southwestern Khuzestan province, between 1 May 2016 and 31 August 2018. In total, 10,009 people were recruited at the Hoveyzeh cohort centre.
Patients with any of the following conditions at baseline were excluded from the study: a history of cancer, renal failure, known liver disease, ALT more than three times the normal limit, alcohol consumption, recent (within 1 year) MI, acute coronary syndrome, stroke, weight loss of more than 5 kg in a month, and microvascular complications. Finally, 9763 of the 10,009 cases met the criteria for this study. All participants then completed questionnaires covering demographic information, cigarette smoking, opium use, consumed drugs, disease history and physical activity. Blood samples for analysis were obtained from the antecubital vein of subjects who had fasted for 10 to 12 h. In the central laboratory of the Hoveyzeh cohort centre, all biochemical parameters were measured using standardized protocols on automated equipment. Fasting serum glucose was assayed using the hexokinase/glucose‐6‐phosphate dehydrogenase method. Diabetes was defined as FBS ≥126 mg/dl, receiving anti‐diabetic drugs, or self‐reported diagnosis of diabetes. Standard enzymatic colorimetric techniques were used to measure serum total cholesterol (TC), triacylglycerol (TG) and high‐density lipoprotein cholesterol (HDL‐C) levels. The level of low‐density lipoprotein cholesterol (LDL‐C) was determined using the Friedewald formula (LDL‐C = TC − HDL‐C − VLDL cholesterol). 9 The levels of AST, ALT and GGT were determined using the International Federation of Clinical Chemistry's method. All these analyses were done using commercial kits (Pars Azmon Inc.). MetS is defined by three or more of the following National Cholesterol Education Program criteria: high TG (≥150 mg/dl); low HDL‐C (≤40 mg/dl for men, <50 mg/dl for women); high fasting blood sugar (≥100 mg/dl) or known type 2 diabetes; hypertension (at least 135/85 mmHg or receiving antihypertensive medication); and a waist circumference greater than 102 cm for men and 88 cm for women.
6, 12, 13

## Statistical analysis

The statistical analyses were conducted using SPSS (v. 15.0). For quantitative variables, data were presented as mean ± standard deviation; for qualitative variables, data are expressed as frequency (number (%)). The normality of the data was assessed using the Kolmogorov–Smirnov test, and the chi‐square test was used to determine the association between qualitative variables. Differences between two groups were calculated by Mann–Whitney tests for skewed data. In addition, the Kruskal–Wallis test was used to compare variables across the three groups. Moreover, logistic regression analysis was employed to estimate the studied risk factors for prediabetes and diabetes vs. the control group. A multivariable model was then fitted to adjust for age, gender and BMI. Receiver operating characteristic (ROC) curve analysis was used to determine the prognostic relationship of liver enzymes and lipid profile in prediabetes and diabetes. All p‐values were two‐tailed, and p < .05 was considered statistically significant.

## Characteristics of the study participants according to FBS tertiles

The final database contained 9763 subjects (3809 males and 5954 females); subjects were divided into three groups based on FBS levels. Table 1 illustrates the characteristics of the three groups. T2DM prevalence was 17.0% (18.1% in males and 16.4% in females), prediabetes prevalence was 23.1% (21.0% in males and 24.5% in females), and control prevalence was 59.8% (60.9% in males, 59.1% in females). Participants with prediabetes and T2DM were older and had a higher BMI, waist circumference, diastolic blood pressure (DBP) and systolic blood pressure (SBP) than control subjects.
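The LDL‐C estimate and the NCEP MetS definition given in the Methods map directly onto code. The following is an illustrative sketch, not the cohort's actual pipeline: the function names, the TG/5 approximation for VLDL cholesterol (the usual Friedewald convention in mg/dl) and the "or" reading of the 135/85 blood pressure criterion are our assumptions.

```python
def friedewald_ldl(tc, hdl, tg):
    """LDL-C = TC - HDL-C - VLDL-C, with VLDL-C approximated as TG/5.

    All values in mg/dl; the approximation is conventionally
    considered invalid for TG > 400 mg/dl.
    """
    return tc - hdl - tg / 5.0


def has_mets(male, waist_cm, tg, hdl, fbs, sbp, dbp,
             on_bp_meds=False, known_t2dm=False):
    """NCEP criteria as stated in the text: MetS = three or more of five."""
    criteria = [
        tg >= 150,                                  # high triglycerides
        hdl <= 40 if male else hdl < 50,            # low HDL-C
        fbs >= 100 or known_t2dm,                   # high fasting glucose
        # "at least 135/85 mmHg or antihypertensive medication";
        # the or-combination of SBP/DBP is our reading
        sbp >= 135 or dbp >= 85 or on_bp_meds,
        waist_cm > (102 if male else 88),           # abdominal obesity
    ]
    return sum(criteria) >= 3
```

For example, a man with waist 105 cm, TG 160 mg/dl and HDL‐C 38 mg/dl meets three criteria and would be classified as having MetS even with normal glucose and blood pressure.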
**TABLE 1** Characteristics of the study participants by fasting glucose group. Values sharing a letter (A, B, C) within a row do not differ significantly.

| Variables | FBS ≤100 mg/dl (n = 5840; 59.82%) | FBS 100–125 mg/dl (n = 2259; 23.14%) | FBS ≥126 mg/dl (n = 1664; 17.04%) | p‐Value |
| --- | --- | --- | --- | --- |
| **Anthropometrics** | | | | |
| Male | 2321 (60.9%) A | 799 (21.0%) B | 689 (18.1%) C | <.0001** |
| Female | 3519 (59.1%) A | 1460 (24.5%) B | 975 (16.4%) C | <.0001** |
| Age (year) | 47.03 ± 8.79 A | 50.53 ± 9.27 B | 52.81 ± 8.89 C | <.0001* |
| Waist circumference (cm) | 97.62 ± 11.83 A | 102.45 ± 12.09 B | 103.15 ± 11.46 B | <.0001* |
| BMI (kg/m²) | 28.14 ± 5.15 A | 30.03 ± 5.52 B | 29.63 ± 5.23 C | <.0001* |
| Diastolic blood pressure (mmHg) | 70.32 ± 10.97 A | 72.60 ± 11.38 B | 73.16 ± 11.51 B | <.0001* |
| Systolic blood pressure (mmHg) | 110.46 ± 16.95 A | 115.49 ± 18.90 B | 118.27 ± 20.19 C | <.0001* |
| **Metabolic syndrome** | | | | |
| No | 4409 (75.5%) A | 630 (27.9%) B | 330 (19.8%) C | <.0001** |
| Yes | 1413 (24.5%) A | 1629 (72.1%) B | 1334 (80.2%) C | <.0001** |
| **Biochemicals** | | | | |
| FBS (mg/dl) | 88.97 ± 6.54 A | 108.25 ± 6.96 B | 201.48 ± 67.98 C | <.0001* |
| LDL (mg/dl) | 105.62 ± 31.17 A | 109.51 ± 33.83 A | 106.86 ± 37.33 B | <.0001* |
| TG (mg/dl) | 147.6 ± 84.2 A | 170.06 ± 107.3 B | 202.02 ± 135.4 C | <.0001* |
| Total cholesterol (mg/dl) | 185.76 ± 37.1 A | 193.97 ± 40.8 B | 196.34 ± 47.9 B | <.0001* |
| HDL (mg/dl) | 50.68 ± 12.24 A | 50.34 ± 11.75 A | 49.21 ± 11.61 B | <.0001* |
| **Hepatic enzymes** | | | | |
| AST (units/L) | 18.06 ± 7.62 A | 19.30 ± 9.19 A | 17.36 ± 9.05 B | <.0001* |
| ALT (units/L) | 20.52 ± 13.72 A | 22.03 ± 14.88 B | 22.25 ± 13.35 C | <.0001* |
| GGT (units/L) | 24.14 ± 16.61 A | 27.05 ± 17.68 B | 34.73 ± 34.01 C | <.0001* |
| ALT/AST | 1.06 ± 0.384 A | 1.10 ± 0.380 B | 1.28 ± 0.514 C | <.0001* |

In prediabetes and T2DM, biochemical variables, including TG, were significantly higher than in the control group. Compared to the control group, the prediabetes and diabetes groups had significantly higher mean total cholesterol levels, with no significant difference between prediabetes and diabetes. In addition, the mean LDL in the diabetes and normal groups was significantly higher than in the prediabetes group, but there was no significant difference between the diabetes and normal groups. In contrast, the HDL level was significantly lower in T2DM compared to prediabetes and the control group, whereas there was no significant difference between prediabetes and the control group. Those who developed prediabetes and T2DM had significantly higher levels of hepatic enzymes, including GGT and ALT, compared to the control group. In contrast, the mean AST was significantly lower in T2DM than in prediabetes and the control group, with no significant difference between prediabetes and the control group (Table 1).

## ROC curve analysis

Receiver operating characteristic curve analysis revealed the significance of GGT, ALT, AST and the ALT/AST ratio in identifying prediabetes or diabetes (Table 2, Figure 1). The ROC curve analysis is presented in Table 2. ROC curve analysis of GGT, ALT, AST and ALT/AST in diabetes vs.
the control group was as follows: AUC = 0.685 (95% CI: 0.673–0.694; p < .0001; cut‐off value: >21.36; sensitivity: 70.25%; specificity: 57.76%), AUC = 0.564 (95% CI: 0.553–0.575; p < .0001; cut‐off value: >14; sensitivity: 70.85%; specificity: 39.88%), AUC = 0.588 (95% CI: 0.577–0.600; p < .0001; cut‐off value: <14; sensitivity: 43.33%; specificity: 71.64%) and AUC = 0.669 (95% CI: 0.658–0.679; p < .0001; cut‐off value: >1.06; sensitivity: 68.87%; specificity: 57.29%), respectively. Similar results were observed for prediabetes vs. the control group: AUC = 0.573 (95% CI: 0.562–0.583; p < .0001; cut‐off value: >20.33; sensitivity: 57.10%; specificity: 53.89%), AUC = 0.537 (95% CI: 0.526–0.548; p < .0001; cut‐off value: >15; sensitivity: 60.69%; specificity: 45.26%), AUC = 0.513 (95% CI: 0.526–0.548; p = .082; cut‐off value: >25; sensitivity: 14.52%; specificity: 88.48%) and AUC = 0.542 (95% CI: 0.531–0.553; p < .0001; cut‐off value: >0.95; sensitivity: 61.66%; specificity: 46.25%).

## Logistic regression analysis

According to the logistic regression analysis, some liver enzymes, the lipid profile and metabolic syndrome were associated with increased odds of developing prediabetes or diabetes (Table 3). The estimated ORs for metabolic syndrome in the prediabetes and diabetes groups were 7.966 (95% CI: 7.139–8.889; p < .0001) and 12.45 (95% CI: 10.88–14.24; p < .0001), respectively. In the case of AST, however, the odds ratio (0.976) indicated a reduction in diabetes odds (95% CI: 0.968–0.984; p < .0001). On the contrary, the ALT/AST ratio increased the odds of developing prediabetes and diabetes by 1.347 (95% CI: 1.190–1.525; p < .0001) and 3.623 (95% CI: 3.159–4.154; p < .0001), respectively.
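As an aside on the machinery behind these numbers (this is an illustrative sketch, not the authors' SPSS workflow, and the function names are ours): the AUC of a single marker such as GGT equals the probability that a randomly chosen case value exceeds a randomly chosen control value (the Mann–Whitney U statistic divided by the number of case/control pairs), and sensitivity/specificity follow from a chosen cut‐off.

```python
def roc_auc(controls, cases):
    """AUC = P(case > control) + 0.5 * P(case == control),
    evaluated over all case/control pairs."""
    wins = ties = 0
    for x in cases:
        for y in controls:
            if x > y:
                wins += 1
            elif x == y:
                ties += 1
    return (wins + 0.5 * ties) / (len(cases) * len(controls))


def sens_spec(controls, cases, cutoff):
    """Sensitivity and specificity for a 'positive if value > cutoff' rule."""
    sens = sum(x > cutoff for x in cases) / len(cases)
    spec = sum(y <= cutoff for y in controls) / len(controls)
    return sens, spec
```

A reported cut‐off such as GGT > 21.36 would then typically be the threshold that maximizes Youden's J = sensitivity + specificity − 1 over all candidate cut‐offs.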
After adjustment for age, sex and BMI, the results differed little from those of the univariate analysis. Both analyses display significantly positive relationships of the ALT/AST ratio and metabolic syndrome with prediabetes and diabetes.

**TABLE 3**

| Variable | Prediabetes: OR | Prediabetes: 95% CI | Prediabetes: p‐Value | Diabetes: OR | Diabetes: 95% CI | Diabetes: p‐Value |
| --- | --- | --- | --- | --- | --- | --- |
| **Model 1 a** | | | | | | |
| GGT | 1.009 | 1.006–1.012 | <.0001 | 1.024 | 1.021–1.027 | <.0001 |
| AST | 1.010 | 1.004–1.016 | <.0001 | 0.976 | 0.968–0.984 | <.0001 |
| ALT | 1.007 | 1.004–1.010 | <.0001 | 1.008 | 1.004–1.012 | <.0001 |
| ALT/AST | 1.347 | 1.190–1.525 | <.0001 | 3.623 | 3.159–4.154 | <.0001 |
| LDL | 1.003 | 1.002–1.005 | <.0001 | 1.001 | 0.999–1.002 | <.0001 |
| HDL | 0.997 | 0.993–1.001 | <.0001 | 0.989 | 0.984–0.993 | <.0001 |
| TG | 1.002 | 1.001–1.002 | <.0001 | 1.005 | 1.004–1.005 | <.0001 |
| TC | 1.005 | 1.004–1.006 | <.0001 | 1.006 | 1.005–1.007 | <.0001 |
| MetS | 7.966 | 7.139–8.889 | <.0001 | 12.45 | 10.88–14.24 | <.0001 |
| **Model 2 b** | | | | | | |
| GGT | 1.010 | 1.007–1.013 | <.0001 | 1.024 | 1.021–1.027 | <.0001 |
| AST | 1.015 | 1.009–1.021 | <.0001 | 0.979 | 0.970–0.988 | <.0001 |
| ALT | 1.012 | 1.009–1.016 | <.0001 | 1.014 | 1.010–1.018 | <.0001 |
| ALT/AST | 1.595 | 1.382–1.842 | <.0001 | 5.632 | 4.776–6.640 | <.0001 |
| LDL | 1.002 | 1.000–1.003 | .040 | 0.999 | 0.997–1.000 | .151 |
| HDL | 0.996 | 0.992–1.001 | .087 | 0.988 | 0.983–0.993 | <.0001 |
| TG | 1.002 | 1.002–1.003 | <.0001 | 1.005 | 1.004–1.006 | <.0001 |
| TC | 1.004 | 1.002–1.005 | <.0001 | 1.005 | 1.003–1.006 | <.0001 |
| MetS | 6.833 | 6.100–7.654 | <.0001 | 10.67 | 9.268–12.28 | <.0001 |

## DISCUSSION

In the current study, we observed a significant
increase in all metabolic risk factors and liver enzymes, except for HDL‐C and AST, in both prediabetic and T2DM subjects, with the differences being more pronounced in diabetic individuals. In subjects with prediabetes and T2DM, the mean LDL, TG and TC levels were higher. Consistent with these findings, Dhoj et al. 14 demonstrated that diabetes is associated with a high prevalence of dyslipidaemia characterized by elevated levels of cholesterol, TG and LDL. Additionally, Jasim et al. 5 identified TG as one of the promising biomarkers for predicting prediabetes and T2DM. These findings support the view that diabetes patients are more susceptible to co‐occurring conditions such as hyperglycaemia, chronic renal failure, hypothyroidism and polypharmacy, with drugs known to have adverse effects on lipid profiles. Patients with diabetes must therefore be treated to prevent coronary artery disease. 15 Individual metabolic syndrome characteristics (such as higher BMI, waist circumference, DBP and SBP levels, among others) were associated with the prevalence of prediabetes and T2DM, according to the findings of this study. Thus, 80% of subjects with T2DM and 72% of the prediabetes group had MetS, whereas only 24% of the control group exhibited metabolic syndrome symptoms. In addition, Ogedengbe et al. 16 found that the prevalence of MetS among T2DM patients is extremely high. This study revealed that liver enzymes, including ALT and GGT but not AST, and the ALT/AST ratio were significantly elevated in prediabetes and T2DM cases. However, some studies have found no correlation between elevated ALT and diabetes, possibly due to the ethnic diversity of the study populations. 6 Forlani et al. 17 reported a high prevalence of elevated ALT, AST and GGT levels in T2DM, which is consistent with our findings.
Although there are no clear biological explanations for the relationships between liver indicators and glucose metabolism, one possible mechanism is that MetS and T2DM increase the risk of liver damage, thereby raising liver enzyme levels. 9 To reduce the risk of liver damage, prediabetic and diabetic patients may require a comprehensive clinical, laboratory and histological examination. In addition, GGT, ALT and the ALT/AST ratio, but not AST, can be used to identify prediabetes and diabetes based on the ROC results. Among prediabetic and diabetic subjects, the GGT level has the highest area under the curve (AUC) and the highest sensitivity compared to the control group. Furthermore, logistic regression analysis revealed that higher levels of ALT, GGT and ALT/AST were independent risk factors for prediabetes and diabetes and that an increase in the ALT/AST ratio increased the risk of T2DM 3.68‐fold, whereas lower AST levels were associated with the risk of diabetes. Sun‐Hye et al. 18 observed that higher levels of GGT and ALT and a lower AST/ALT ratio were independent risk factors for diabetes and impaired fasting glucose (IFG). Additionally, Zhao et al. 19 evidenced that the ALT/AST ratio may be a useful indicator of insulin resistance (IR) in the Chinese population. According to several studies, elevated GGT and ALT levels are also useful as early markers of dysregulated glucose metabolism, which strongly correlate with prediabetes and diabetes. 20 A second proposed mechanism for the relationship between hepatic indices and glucose metabolism is that elevated serum ALT and GGT levels indicate hepatic steatosis, resulting in hepatic insulin resistance (IR). 18 IR is a risk factor for T2DM. 19 It therefore remains unknown whether T2DM increases liver enzyme levels or whether elevated liver enzyme levels increase the risk of developing T2DM, and additional research is required to clarify these theories.
In contrast to our findings, some studies have found that elevated GGT levels, but not ALT or AST, can be used to predict the onset of T2DM. 9 Sattar et al. 21 also demonstrated that elevated ALT levels within the ‘normal’ range predict diabetes independently of elevated AST levels. Although we did not examine the role of gender in transaminase levels in this study, a possible explanation for these contradictory findings may be that transaminase levels are gender‐specific, according to some studies. 22 Consequently, using the ratio of variables, such as ALT/AST, rather than each variable individually may be more effective in evaluating diabetes patients.

## CONCLUSION

Our results indicated a significant increase in liver enzymes except AST, in the lipid profile except HDL‐C, and in MetS status in both prediabetic and T2DM subjects, with the differences being more pronounced in diabetic individuals. On the one hand, these variables or their ratios may be considered predictive risk factors for diabetes; on the other hand, they may be utilized as diagnostic factors. However, it is unknown whether T2DM increases liver enzyme levels or whether elevated liver enzyme levels increase the incidence of T2DM, and the pathophysiologic pathways underlying this association are unclear. Therefore, additional research is required to clarify these theories and validate their clinical applications.

## AUTHOR CONTRIBUTIONS

N. M. designed and supervised the study. N. D. wrote the paper. S. SP., Z. R. and B. C. analysed data. All authors read and approved the final manuscript.

## CONFLICT OF INTEREST

The authors declare no conflict of interest.

## ETHICAL APPROVAL

This study was approved by the Ethics Committee of Ahvaz Jundishapur University of Medical Sciences (ethical code: IR.AJUMS.REC.1398.455), and informed consent was obtained from all patients who participated in the Hoveyzeh cohort.

## DATA AVAILABILITY STATEMENT

Data will be made available on request.
# Additional Active Movements Are Not Required for Strength Gains in the Untrained during Short-Term Whole-Body Electromyostimulation Training

## Abstract

Recommendations for conventional strength training are well described, and the volume of research on whole-body electromyostimulation training (WB-EMS) is growing. The aim of the present study was to investigate whether active exercise movements during stimulation have a positive effect on strength gains. A total of 30 inactive subjects (28 completed the study) were randomly allocated into two training groups, the upper body group (UBG) and the lower body group (LBG). In the UBG (n = 15; age: 32 (25–36); body mass: 78.3 kg (53.1–114.3 kg)), WB-EMS was accompanied by exercise movements of the upper body, and in the LBG (n = 13; age: 26 (20–35); body mass: 67.2 kg (47.4–100.3 kg)) by exercise movements of the lower body. Therefore, the UBG served as a control when lower body strength was considered, and the LBG served as a control when upper body strength was considered. Trunk exercises were performed under the same conditions in both groups. During the 20-min sessions, 12 repetitions were performed per exercise. In both groups, stimulation was performed with 350 μs wide square pulses at 85 Hz in biphasic mode, and stimulation intensity was 6–8 (scale 1–10). Isometric maximum strength was measured before and after the training (6 weeks; one session/week) on 6 exercises for the upper body and 4 for the lower body. Isometric maximum strength was significantly higher after the EMS training in both groups in most test positions (UBG p < 0.001–0.031, r = 0.88–0.56; LBG p = 0.001–0.039, r = 0.88–0.57). Only for the left leg extension in the UBG (p = 0.100, r = 0.43) and for the biceps curl in the LBG (p = 0.221, r = 0.34) were no changes observed. Both groups showed similar absolute strength changes after EMS training.
Body mass adjusted strength for the left arm pull increased more in the LBG (p = 0.040, r = 0.39). Based on our results, we conclude that concurrent exercise movements during a short-term WB-EMS training period have no substantial influence on strength gains. People with health restrictions, beginners with no experience in strength training and people returning to training might be particularly suitable target groups, owing to the low training effort. Presumably, exercise movements become more relevant once initial adaptations to training are exhausted.

## 1. Introduction

Whole-body electromyostimulation (WB-EMS) is a training method that can complement or, to some extent, replace traditional resistance training, as it can be used alone, superimposed, or combined (at different training time points). Since several electrodes are used [1], different muscles can be stimulated at the same time [2]. Strength improvements can be achieved with both high-intensity resistance training and WB-EMS [3]. Previous studies have shown that WB-EMS is applicable in healthy people [4] and also in patients, e.g., people suffering from Parkinson's disease [5] or sarcopenic obesity [6]. In conventional resistance training, the one repetition maximum (1-RM) is used to describe the training intensity [7]. Since it represents the maximal voluntary contraction, a comparison between electromyostimulation (EMS) and normal contraction is possible [8]. A low-cost way to determine the intensity of strength training is to capture the perceived exertion using the Borg scale [9], which is also used in WB-EMS training [10,11]. In contrast to the 1-RM, where voluntary force production under external load is recorded, the perceived exertion reflects the internal load. For beginners in conventional strength training, at least 2 training sessions per week are recommended. Both multi-joint and single-joint exercises can be performed, using a variety of equipment and one's own body weight.
Per set, 8 to 12 repetitions should be completed at 60–$70\%$ of the repetition maximum [12]. To provide a safe and effective application of WB-EMS, guidelines recommend restricting the duration of one session to a maximum of 20 min. Moreover, the frequency should be limited to one session a week for at least the first eight weeks, or a minimum interval of four days should be maintained thereafter [13]. Perceived exertion should be rated approximately as “hard” to “hard+” (lower during initial training) [13], corresponding to 5 to 6 on the Borg CR10 scale [14]. Nevertheless, in some trials the training frequencies were higher [2,15], and sometimes lower with one session a week [16,17], compared to the aforementioned recommendation after familiarization. The aggregated training stimulus consists of the number of sessions a week and the length of the training period. Usually, eight sessions or more have been conducted in strength-related WB-EMS studies with healthy subjects [10,18]. Early strength improvements due to strength training can be attributed mainly to neural factors. From the third to fifth week on, strength development is mainly caused by hypertrophy [19]. Increases after very few sessions (as seen after three training sessions) are supposedly attributable to lower antagonist activity or motoric improvements of synergists [20]. Elgueta-Cancino and colleagues [21] identified less inhibitory activity in the cortex, higher corticospinal excitability, and altered motor unit activation as assumed mechanisms of initial strength gain. Muscle growth and strength gain can also be achieved by compact training (eight weeks with three sessions a week) with neuromuscular electrical stimulation [22]. Similar to conventional strength training, early strength gains owing to EMS training are achieved without muscle growth [23]. The body of research on WB-EMS training is growing [24].
EMS can be superimposed on maximum or sub-maximum voluntary dynamic or isometric contractions, or applied without any concomitant voluntary contraction. Nevertheless, little is known about the importance of active exercise movements during stimulation. Strength gains due to EMS with exercise movements were previously shown [25,26], and some authors addressed the impact of EMS superimposed on intense strength training [27,28]. To our knowledge, only Kemmler and colleagues [29] investigated the effects of smaller, WB-EMS accompanying movements. In this randomized controlled trial (RCT), participants trained once a week for 12 weeks. However, only older females with little muscle mass were included for the comparison between dynamic use (movements during stimulation) and passive use (only isometric contractions during stimulation), limiting the generalizability of the results obtained. Therefore, the present study aims to investigate whether active exercise movements during stimulation have a positive effect on strength gains of selected upper and lower body muscles in young healthy subjects of both sexes, in training sessions using mobile, easily accessible fitness equipment or one's own body mass. We hypothesized that WB-EMS combined with concurrent exercise movements would result in higher strength gains than WB-EMS alone. Hence, this study was designed to clarify whether movement sequences are necessary for strength gains during WB-EMS or whether the electrostimulation alone induces strength gains. The results might help fitness professionals and EMS users to optimize recommendations for WB-EMS training depending on individual goals and requirements.

## 2.1. Subjects

The number of subjects to be included in the study was determined using an a priori sample size calculation for statistical comparison of the means of two unpaired groups (using the program GPower 3.1), based on the mean of the effect sizes (Δ strength leg extension: $d = 1.67$; Δ strength leg flexion: $d = 0.79$) reported by Kemmler and colleagues [29], whose study is similar to the present one. A predefined lower limit of statistical power of $80\%$ and an α error probability of 0.05 were assumed. A dropout rate of $20\%$ was further added. Based on the results of this calculation, a total of 30 subjects were initially recruited for participation. Subjects were included when aged between 20 and 40 years and having abstained from physical activity for at least six months prior to the start of the study. Both sexes were eligible. Subjects were excluded when acute injuries or physical complaints were reported, or when contraindications as listed by Kemmler and colleagues [30] or Stöllberger and Finsterer [31] were present (e.g., epilepsy, bleeding disorders). No other exclusion criteria were defined (e.g., BMI, VO2max). The study was conducted in accordance with the principles of the Declaration of Helsinki [32] and approved by the ethics committee of the University of Wuppertal (MS/BBL 200114 Wehmeier). All subjects gave written consent to participate in the study.

## 2.2. Experimental Design

The procedure was based on a randomized controlled trial design (Figure 1). Subjects were randomly assigned to two training groups (with the program RandList 1.2), with the number of subjects in both groups being equal. In the upper body group (UBG), WB-EMS was accompanied by exercise movements of the upper body only, and in the lower body group (LBG) by exercise movements of the lower body only. Therefore, the UBG served as a control when lower body strength is considered, and the LBG served as a control when upper body strength is considered.
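The a priori sample size calculation described in Section 2.1 can be approximated in a few lines. This is a sketch using the normal approximation rather than GPower's exact noncentral-t computation, so it yields a close lower bound on the reported total of 30; all variable names are ours.

```python
import math
from statistics import NormalDist

# Sketch: two-sample a priori sample size via the normal approximation,
# mirroring the inputs reported above (two-sided alpha = 0.05, power = 0.80,
# mean of the two effect sizes from Kemmler and colleagues [29], 20% dropout).
d = (1.67 + 0.79) / 2                        # mean effect size = 1.23
z_alpha = NormalDist().inv_cdf(1 - 0.05 / 2)  # 1.96 for two-sided alpha = 0.05
z_beta = NormalDist().inv_cdf(0.80)           # 0.84 for power = 0.80

n_per_group = math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)   # 11 per group
n_with_dropout = math.ceil(n_per_group / (1 - 0.20))          # 14 per group
total = 2 * n_with_dropout                                    # 28 in total
```

The exact noncentral-t computation used by GPower inflates the per-group n slightly, which together with the dropout allowance is consistent with the 30 subjects recruited.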
With this design, WB-EMS without exercises and WB-EMS with exercises could be compared. Intervention duration was set to six weeks, training frequency to one session/week, and the duration of the training session to 20 min. Before and after the training period, maximum force was determined during various exercises. Blinding of subjects was not possible because the intervention is identifiable. Blinding of the investigator was not applicable because the training instructions and the test instructions were given by the same person, a professional EMS trainer with a bachelor's degree in sports science. Subjects were asked to maintain their dietary habits and to keep their physical activity levels constant, which also meant avoiding additional physical activity. All interventions and measurements were conducted in an EMS studio (go!Orange—Studio für EMS, Remscheid, Germany).

## 2.3. WB-EMS Procedure

Both the UBG and the LBG received the same WB-EMS application (miha bodytec II; miha bodytec GmbH, Gersthofen, Germany) once a week. Subjects wore thin, tight-fitting underwear. The vest with wetted electrodes was placed on the upper body, and the wetted electrode bands on the arms, buttocks, and legs (miha bodytec). During the 20-min training, the upper and lower back, abdominal muscles, buttocks, muscles around the thigh, chest, and muscles around the upper arm were stimulated with 85 Hz of 350 μs wide rectangular pulses in biphasic mode. Both the duration of the pulse interval (stimulation on) and the pulse pause (stimulation off) were set to 4 s. The pulses were ramped up to the targeted intensity without delay (full intensity directly available) and similarly ramped down to zero (direct interruption of the stimulation) at the end of the stimulation phase. To maintain the same conditions, the stimulation intensity was adjusted to 6–8 on a scale of 1 (hardly noticeable) to 10 (painful) [33].
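As a quick sanity check on the stimulation parameters above, the on/off cycle arithmetic can be sketched as follows (a minimal illustration; the variable names are ours, not from the protocol):

```python
# Sketch: timing arithmetic for the WB-EMS protocol described above
# (85 Hz pulse frequency, 4 s stimulation on / 4 s off, 20-min session).

SESSION_S = 20 * 60        # session duration in seconds
ON_S, OFF_S = 4, 4         # stimulation interval and pulse pause
FREQ_HZ = 85               # pulse frequency during the on-phase

cycle_s = ON_S + OFF_S                      # one on/off cycle lasts 8 s
cycles_per_session = SESSION_S // cycle_s   # 1200 s / 8 s = 150 cycles
pulses_per_on_phase = FREQ_HZ * ON_S        # 85 Hz * 4 s = 340 pulses
duty_cycle = ON_S / cycle_s                 # 0.5 (half the session is stimulation)
```

With 12 repetitions per exercise and one repetition per pulse interval, only a small fraction of the 150 available intervals per session is needed for any single exercise.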
Regardless of group affiliation, muscles were voluntarily tensed during the stimulation episode.

## 2.4. Exercise Procedure

Both groups received WB-EMS while performing exercises (Supplementary Figures S1–S3). The UBG used upper body exercise movements (chest and upper back, including shoulders and arms) and the LBG used lower body exercise movements (buttocks and thigh muscles, including abductors and adductors). The UBG training consisted of rowing, butterfly reverse, latissimus pulls, pushups, butterfly, biceps curls, and triceps pulldowns. The LBG training consisted of squats, lunges, adductions, abductions, hip lifts, and leg raises. Both groups exercised the trunk (abdomen and lower back) with back extensions, crunches, and oblique crunches. Selected exercises were performed with additional fitness equipment (fitness tubes and elastic bands, each with varying resistance, and a Swiss ball). During the first 1 to 2 sessions (depending on the training level), subjects maintained the position over the period of stimulation that they had taken at the onset of the stimulus. One set of 12 repetitions was performed per exercise, with each repetition beginning with the onset of the pulse. To maintain the same physical load level, i.e., 16 to 17 on the Borg RPE scale [33], the number of movements during an impulse interval could be increased up to three. If the training stimulus was still not sufficient after this customization, the originally targeted static exercise position was to be maintained during the interval break; in the case of overexertion, this adjustment was reversed. Another way to increase the intensity to the desired level was to increase the resistance, either by adding a fitness tube or rubber band, or by using a version that offered more resistance.

## 2.5. Isometric Strength Testing Procedure

Isometric maximum strength (N) was determined during 10 different exercises (arm adduction, arm pull, leg extension, and leg curl, each unilateral left and unilateral right, as well as biceps curl and triceps pulldown, each bilateral) in standardized positions (Supplementary Figure S4), pre (initial measurement) and post (final measurement) intervention, using a mobile device (KD 9363 including DMS measuring amplifier GVS-2; ME-measuring systems GmbH, Hennigsdorf, Germany), which was more practicable than the determination of the 1-RM. Reliability of the isometric maximum strength measurement method was verified by Runkel and colleagues for several test positions (triceps pulldown, biceps curl, arm pull, sit-up, leg curl, leg extension) in healthy subjects with a comparable body mass index [34], with a high intraclass correlation coefficient ($r = 0.764$ to 0.934). At both time points, the tests were performed three times in each position, with a 10-second pause between individual trials. In each case, the maximum value was used for analysis. The whole testing procedure lasted approximately 20 min.

## 2.6. Statistical Analysis

Due to the presence of some outlying values (see box plots), skewed distributions in some cases (Shapiro–Wilk test), partial heterogeneity of error variances (Levene's test), and partial heterogeneity of covariances (Box test), nonparametric statistical tests were employed. The differences between the initial and the final maximum isometric strength were determined separately for each group using the Wilcoxon test. The initial and the final values were compared between the groups using the Mann–Whitney U test. Absolute differences were calculated by subtracting the initial values from the final values, and relative differences were calculated by dividing the final values by the initial values (the initial value was set to $100\%$).
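The Mann–Whitney U comparison described above, together with the effect size r = |z|/√N used throughout the results, can be sketched in a few dependency-free lines (function and variable names are ours; no tie correction is applied, so this is illustrative rather than a replacement for a statistics package):

```python
import math

def mann_whitney_r(a, b):
    """U statistic, normal-approximation z, and effect size r = |z| / sqrt(N)."""
    # Average ranks over the pooled sample, handling ties.
    combined = sorted((v, i) for i, v in enumerate(list(a) + list(b)))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2 + 1            # average rank for tied values (1-based)
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg
        i = j + 1
    n1, n2 = len(a), len(b)
    r1 = sum(ranks[:n1])                 # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2          # U statistic for sample a
    mu = n1 * n2 / 2                     # mean of U under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma                # normal approximation
    return u1, z, abs(z) / math.sqrt(n1 + n2)

u, z, r = mann_whitney_r([1, 2, 3, 4], [5, 6, 7, 8])  # fully separated toy groups
```

For the fully separated toy data, U = 0 and r ≈ 0.82, i.e., a large effect by the classification used in this study.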
Group comparisons were performed using the Mann–Whitney U test for absolute and relative differences. The significance level was set to $p < 0.05$, and two-tailed analyses were used. The results of the non-parametric tests were used to calculate the effect sizes [35]. A distinction was made between large effects (r ≥ 0.5), medium effects (0.3 ≤ r < 0.5), and small effects (0.1 ≤ r < 0.3) [36]. Statistics were calculated using SPSS (IBM SPSS Statistics for Windows, Version 28.0, IBM Corp., Armonk, NY, USA) and Excel (Microsoft Excel for Windows, 16.0, Microsoft Corp., Redmond, WA, USA). An intention-to-treat analysis was not possible due to dropouts occurring at baseline.

## 3. Results

Of the included subjects, 28 completed the study; the dropouts occurred due to personal reasons. The characteristics of the groups did not differ significantly from each other (Table 1), and the total training volume was similar in both groups. Most subjects ($n = 9$ in each group) completed five sessions, and no adverse effects occurred. Body mass remained unchanged in both the UBG and the LBG (Table 1). Neither the initial nor the final values differed significantly between the two groups. Isometric maximum strength was significantly higher after EMS training in both groups, both in absolute terms (Table 2 UBG; Table 3 LBG) and body mass adjusted (N/kg), except for left leg extension in the UBG and biceps curl in the LBG. The changes in absolute strength were similar in both groups (Table 4). Body mass adjusted strength during left arm pull showed a higher increase in the LBG (Figure 2). In the other test positions, group affiliation made no difference (Figure 2, Figure 3 and Figure 4). Furthermore, the LBG achieved a higher percentage strength gain in left arm pull, both absolute (Table 4) and body mass adjusted (UBG median $114.25\%$ vs. LBG median $137.05\%$; $p = 0.020$; $r = 0.44$).

## 4.1. Overview

Significant strength changes were observed in both groups after about five weeks of training (one session per week). The percentage differences between the initial and final tests were higher than those found in the reliability analysis of the test device by Runkel and colleagues [34]; therefore, the changes can be attributed to training. LBG training improved left arm pull strength more than UBG training; however, there were no group differences in the other exercises. Initial values did not differ significantly between the two groups, although the differences may have been practically relevant. If the higher initial values had been due to differences in training history, a lower ability to further increase strength would have needed to be considered [37]. However, subjects were required to have abstained from intense physical activity for at least six months before the start of the study.

## 4.2. Accompanying Voluntary Activity

Little is known about the effects of movements on strength gain during EMS. During local application, movements are usually avoided and isometric contractions are performed. Maffiuletti [38] summarized that there are no differences in strength increase between EMS and EMS superimposed on voluntary contractions; however, this conclusion is based on the results of isometric interventions. Although movements are thought to promote the activity of stimulated muscles [26], our results failed to show a consistent influence of active exercise movements on strength gains. Furthermore, strength gains from conventional resistance training depend, among other factors, on the range of motion used [39,40]. However, isometric contractions at multiple joint angles might cover, at least in part, the physiological range of motion. For EMS training, Maffiuletti [38] recommends changing the joint position and, furthermore, changing the electrode positioning to increase recruitment.
Admittedly, Kemmler and colleagues [29] demonstrated the benefit of movement during WB-EMS use, with participants exercising in a supine position. In contrast, our participants performed exercises in different positions. Therefore, any movements of body parts that were not primarily intended for the exercises, and possible differences in resistance to gravity, might have influenced the results. Furthermore, it needs to be considered that additional fitness equipment (fitness tubes and elastic bands with different resistance as well as a Swiss ball) was used for selected exercises. However, exercise movements using additional fitness equipment did not affect the results. In addition, both the UBG and LBG performed exercises for the trunk; therefore, both groups received partially similar dynamic training stimuli (three exercises). Movements inevitably lead to changes in muscle length and shape (e.g., the biceps muscle during a curl); hence, changes in electrode contact were very likely to occur. Furthermore, training that aims to enhance endurance and strength at the same time, such as EMS superimposed on cycling [41,42], requires movements. However, stimulation intensity must be considered to ensure the range of motion [43].

## 4.3. Training Models and Adaptations

Supraspinal mechanisms appear to be responsible for the initial strength development through EMS training [23]. Bezerra and colleagues [44] showed increased strength after EMS superimposed onto maximum isometric quadriceps contractions, not only of the exercised leg but also of the unexercised leg, confirming a neural contribution. The potential to use EMS for rapid strength gains was demonstrated by Deley and colleagues [45], who reported that maximum dynamic leg extension torque in prepubertal girls could be increased by up to $50.6\%$ with three weekly isometric applications over a three-week period. According to Adams [46], patients with muscle atrophy as well as injured individuals are target groups for the use of EMS.
After 5 to 6 weeks, a 10 to $15\%$ enhancement of muscle function can be achieved, but three sessions a week are recommended. Several studies confirmed the impact of WB-EMS on strength [10,26]. However, to our knowledge, only Kemmler and colleagues [29] have studied the effects of exercise during WB-EMS to date. In most cases, the lower body was investigated. Von Stengel and Kemmler [25] showed that leg/hip strength can be improved with 1.5 WB-EMS training sessions (with unloaded, low-effort exercises) per week over a 14 to 16 week period, regardless of age. Furthermore, strength gains due to unloaded WB-EMS were similar to those of HIT training after 16 weeks with three sessions in two weeks [3]. An increase in strength was also observed after shorter training periods. For example, WB-EMS superimposed on jumps twice a week over seven weeks significantly improved leg strength in contrast to normal jump training [10,47,48]. In the study by Wirtz and colleagues [28], leg flexor strength increased only after combining stimulation of multiple body parts with loaded squats ($100\%$ 10-RM) twice per week, and it was higher three weeks after the six-week training compared to the same training without stimulation. Dörmann and colleagues [18] showed significant improvements in leg strength after a four-week, eight-session WB-EMS training program; these improvements were similar to those seen in the control group, which performed the same strength training without additional stimulation and in which intensification was accomplished using other training tools. However, not only leg muscles but also upper body muscles could benefit from dynamic WB-EMS. Reljic and colleagues [26] observed improvements throughout the entire body after a 12-week WB-EMS program with slight motions, consisting of two sessions per week.
Our results suggest that even fewer training sessions than previously described are beneficial, whether or not exercise movements are performed during stimulation, which appears to be due to neural factors. Therefore, not only locally applied EMS training regimens have the potential to increase strength, but also WB-EMS training regimens without additional exercise movements.

## 4.4. Transferability

Benefits from WB-EMS can also be expected, for example, for patients suffering from sarcopenia, sarcopenic obesity, and low back pain [14]. It might be useful, especially for beginners, to start WB-EMS training with a five-week training period without additional exercise movements to improve basic strength before starting a more challenging exercise program. WB-EMS without additional exercise movements can be a first access to training when health conditions do not allow conventional exercises or when a lack of compliance exists. Relative to WB-EMS, local application appears to be superior in gaining strength [49]; however, the lack of focus on selected zones, owing to stimulation of the entire body, is a suggested explanation for the difference [14]. Therefore, only the target muscles could be stimulated rather than using all available electrodes, even if an electrode suit is worn, or zones could be stimulated in an individual order.

## 4.5. Limitations

We have shown that the effect of WB-EMS on strength gains is independent of the concomitant exercise movements. Nevertheless, some limitations need to be acknowledged. A test of core strength would have been useful, as both groups performed core strength exercises under the same conditions and a higher strength could be expected, as observed in the study by Berger and colleagues [1], although they used a more extensive training program. Owing to two dropouts, the group sizes were slightly different, which affected the comparison.
Furthermore, the strength gains of the dynamically trained muscles might have been underestimated, since only isometric strength was tested. It must also be mentioned that the increase in strength might have been influenced by deviations from the predefined number of training sessions. To evaluate the intensity of the movement sequences, an unstimulated group could have been used. Furthermore, an inactive group could have been used as a reference for the interventions. However, the study focused on the comparison between the EMS application without and the application with concurrent exercise movements. When using WB-EMS training technology, the load parameters must be set with care to avoid unintended side effects, particularly during the first sessions of novices, when adaptation to the load has not yet occurred in the form of the “repeated bout effect” [50].

## 5. Conclusions

WB-EMS training without accompanying movement exercises leads to substantial strength gains even during a short WB-EMS training period. At the beginning of WB-EMS training, electromyostimulation is more important for strength gains than active exercise movements. Therefore, future studies should examine the effects of exercise movements during long-term training periods, or consider individuals already adapted to WB-EMS training or strength training. The transferability of the results to a collective experienced with WB-EMS or strength training should be questioned, as movements (and maybe other approaches, e.g., additional mass or complicating tasks) may become more relevant when initial adaptations to training are exhausted. Since the training effort with WB-EMS is low, people with health restrictions, beginners without experience in strength training, and those returning to training might benefit from these results. These groups could refrain from exercise movements during the first WB-EMS training sessions and integrate them during the course of the subsequent training.
# Investigating the relationship between haematological parameters and metabolic syndrome: A population-based study

## Abstract

Chronic inflammation plays a role in metabolic syndrome (MetS); haematological inflammatory parameters can be used as predictive factors for MetS. Adults with a high WBC count, RDW, MHR, and NHR without any associated underlying chronic disease should be screened, because they are at high risk of developing MetS.

### Background

Metabolic syndrome (MetS) is a global public health concern. Chronic inflammation plays a role in MetS; haematological inflammatory parameters can be used as predictive factors for MetS.

### Objective

Hereditary and environmental factors play an important role in the development of MetS. This study aimed to determine the relationship between haematological parameters and MetS in the adult population of Kerman, southeastern Iran.

### Methods

This cross-sectional study was a sub-analysis of 1033 subjects who participated in the second phase of the Kerman Coronary Artery Disease Risk Factor Study (KERCADRS). Metabolic syndrome was diagnosed according to the Adult Treatment Panel III (ATP III) definition. The Pearson correlation coefficient was used to investigate the relationship between haematological parameters and age and the components of metabolic syndrome. The role of WBC, neutrophil, lymphocyte and monocyte counts in predicting metabolic syndrome was evaluated using the receiver operating characteristic (ROC) curve.

### Results

White blood cell (WBC) count and its subcomponent cell counts, red cell distribution width (RDW), monocyte to HDL ratio (MHR) and neutrophil to HDL ratio (NHR) had a significant positive correlation with the severity of MetS. For females, the cut-off value of WBC was 6.1 ×10³/μL, with a sensitivity of $70\%$ and a specificity of $52.9\%$; for males, the cut-off value was 6.3 ×10³/μL, with a sensitivity of $68.2\%$ and a specificity of $46.7\%$.
### Conclusion

WBC and its subcomponent counts, RDW, MHR and NHR are valuable biomarkers for further risk appraisal of MetS in adults. These markers are helpful in the early diagnosis of individuals with MetS.

## INTRODUCTION

Metabolic syndrome (MetS) has had different definitions since 1988, when it was first introduced by Reaven. 1 Based on the latest definition of this syndrome, MetS includes at least three factors from the following disorders: central obesity, hypertension, elevated fasting glucose and dyslipidemia (reduced high-density lipoprotein (HDL) or elevated triglycerides (TG)), 2 which increases the risk of insulin resistance, diabetes mellitus, cerebrovascular disease, cardiovascular disease, common cancers, osteoporosis and total mortality. 3, 4, 5 The prevalence and incidence of MetS have increased significantly following the increase in urbanization, improper nutrition and lack of physical activity, and it has become a global health concern. 6, 7 Although the underlying mechanism of MetS is not yet known, oxidative stress, chronic inflammation and insulin resistance seem to be the most likely mechanisms. 8 A growing number of studies emphasize the association of MetS components with haematological parameters, including white blood cell (WBC), red blood cell (RBC) and platelet (PLT) counts and haematocrit (HCT) level, as potential indicator markers of thrombotic and inflammatory states. 9, 10, 11, 12 Meng et al. demonstrated that leukocyte count was a good marker for assessing the risk of MetS and cardiovascular disease. 13 Some studies reported that WBC and PLT counts were significantly correlated with the number of MetS components. 14, 15 Ahmadzadeh et al. pointed out that high haemoglobin (HB) levels and HCT can also indicate MetS development. 16 Since inflammation plays a role in MetS, these haematological inflammatory parameters can be used as predictive factors for MetS.
Haematological parameters can be measured easily from peripheral blood with cost-effective CBC tests. Hereditary and environmental factors play an important role in the development of MetS. To date, no study has investigated the characteristics of MetS and its relationship with blood parameters in the population of southeastern Iran. Therefore, this study aimed to determine the relationship between haematological parameters and MetS in Kerman, southeastern Iran.

## Study design and participants

This cross-sectional study was a sub-analysis of 1033 subjects who participated in the second phase of the Kerman Coronary Artery Disease Risk Factor Study (KERCADRS). 17 Cluster sampling from the entire population of Kerman residents was used. In the first phase of the KERCADRS, 250 postal codes were selected randomly according to the post-office list of city residents, people over 15 years of age were invited to participate, and 24 people were recruited in each cluster. In the second phase (420 clusters of 24 participants each), people were contacted again, and 1033 who met the inclusion criteria from February 2017 to October 2018 were included in our study (Figure 1). None of the included participants had a history of chronic infectious or inflammatory diseases or the use of any drugs known to affect haematological parameters or lipoprotein metabolism. More details about the data collection method have been published in the study of Najafipour et al. 17

**FIGURE 1:** *The flowchart of included participants.*

## Data collection

After obtaining informed consent from the subjects, demographic data (age, gender) and anthropometric information were collected. A trained interviewer asked participants about cigarette smoking and opium use. People who routinely smoked cigarettes or consumed opium at the time of data collection were considered cigarette smokers and opium addicts, respectively.
Height was measured in the standing position without shoes, from heel to head, with an error of 0.5 cm; weight was measured without shoes and extra clothing on a digital scale with an error of 100 g; and body mass index (BMI) was calculated as weight (kg) divided by the square of height (m²). Waist circumference (WC, cm) was measured at the umbilical level in the standing position, with 20–30 cm between the feet. Hip circumference (HC, cm) was measured at the largest circumference around the buttocks. The waist-to-hip ratio (WHR) was calculated by dividing WC by HC. After 10 min of rest, blood pressure (BP) was measured with a standard manometer on the right arm in the sitting position according to World Health Organization (WHO) standards, and blood samples were taken after 12–14 hours of fasting and kept at room temperature. CBC, fasting plasma glucose (FPG) and serum lipids (HDL cholesterol and TG) were tested by routine laboratory methods. According to the Adult Treatment Panel III (ATP III) definition, the presence of at least three of the following five factors is required for the diagnosis of metabolic syndrome: blood pressure over 130/80 mm Hg or consumption of antihypertensive drugs; TG level over 150 mg/dl; FPG over 100 mg/dl or consumption of anti-diabetic medication such as insulin; HDL cholesterol level less than 40 mg/dl (men) or 50 mg/dl (women); and WC over 102 cm (men) or 88 cm (women).

## Sample size estimation

In the study of Oda and Kawai, the mean WBC in women with three components of metabolic syndrome was 5416 ± 1163 and in women with only two components was 5077 ± 1358. 18 The minimum sample size required based on these numbers, considering a power of 0.8 and an alpha of 0.05, was at least 275 people per group.

## Statistical analysis

Statistical analysis was performed using SPSS version 16 software (SPSS Inc.).
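The ATP III definition given in the Data collection section reduces to a simple component count. A minimal sketch follows (function and field names are ours, not from the study; the rule of at least three of five components follows the standard ATP III definition):

```python
# Sketch: counting ATP III metabolic syndrome components with the thresholds
# described above. Names and the example values are illustrative only.

def mets_components(sbp, dbp, on_bp_meds, tg, fpg, on_dm_meds, hdl, wc, male):
    components = 0
    if sbp > 130 or dbp > 80 or on_bp_meds:   # elevated blood pressure (mm Hg)
        components += 1
    if tg > 150:                               # elevated triglycerides (mg/dl)
        components += 1
    if fpg > 100 or on_dm_meds:                # elevated fasting glucose (mg/dl)
        components += 1
    if hdl < (40 if male else 50):             # reduced HDL cholesterol (mg/dl)
        components += 1
    if wc > (102 if male else 88):             # central obesity (cm)
        components += 1
    return components

def has_mets(**kw):
    # Standard ATP III rule: at least three of the five components.
    return mets_components(**kw) >= 3

# Hypothetical male subject: elevated BP, TG, low HDL, high WC -> 4 components.
n = mets_components(sbp=135, dbp=85, on_bp_meds=False, tg=180, fpg=95,
                    on_dm_meds=False, hdl=38, wc=104, male=True)
```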
Quantitative variables were reported as mean ± standard deviation, and qualitative variables were reported as numbers and percentages. Qualitative variables were compared between the two groups using the Pearson chi-square or Fisher's exact test. Quantitative variables were compared separately between individuals with and without metabolic syndrome, in male and female groups, using the independent samples t test. The Pearson correlation coefficient was used to investigate the relationship between haematological parameters and age and the components of metabolic syndrome. The relationship between haematological parameters and the number of metabolic syndrome components was investigated using Spearman's correlation coefficient. The role of WBC, neutrophil, lymphocyte and monocyte counts in predicting metabolic syndrome was evaluated using the receiver operating characteristic (ROC) curve with MedCalc® Statistical Software version 20.013 (MedCalc Software Ltd). The optimal cut-off point was determined using the Youden index.

## RESULTS

A total of 1033 individuals (660 women, 373 men) were included in this study, and the sociodemographic, laboratory and clinical characteristics of the participants are summarized in Table 1.
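The Youden-index cut-off selection described in the statistical analysis section can be sketched as follows (a small, dependency-free illustration with made-up data; the function and variable names are ours):

```python
# Sketch: choosing an optimal biomarker cut-off by maximising Youden's
# J = sensitivity + specificity - 1, as done for WBC and its subcomponents.

def youden_cutoff(values, labels):
    """values: biomarker measurements; labels: 1 = MetS, 0 = no MetS."""
    best_cut, best_j = None, -1.0
    for cut in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if y == 1 and v >= cut)
        fn = sum(1 for v, y in zip(values, labels) if y == 1 and v < cut)
        tn = sum(1 for v, y in zip(values, labels) if y == 0 and v < cut)
        fp = sum(1 for v, y in zip(values, labels) if y == 0 and v >= cut)
        sens = tp / (tp + fn)              # sensitivity at this cut-off
        spec = tn / (tn + fp)              # specificity at this cut-off
        j = sens + spec - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Toy WBC-like data (x10^3/uL), purely illustrative, not study data:
wbc = [5.0, 5.5, 5.8, 6.0, 6.4, 6.8, 7.1, 7.5]
mets = [0, 0, 0, 0, 1, 1, 1, 1]
cut, j = youden_cutoff(wbc, mets)          # perfect separation at 6.4
```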
**TABLE 1**

| | Male: Normal (N = 291) | Male: Metabolic syndrome (N = 82) | p value | Female: Normal (N = 463) | Female: Metabolic syndrome (N = 197) | p value |
| --- | --- | --- | --- | --- | --- | --- |
| Age (years) | 45.48 ± 16.21 | 51.90 ± 13.84 | <.001 | 40.29 ± 14.07 | 53.33 ± 11.72 | <.001 |
| Smoking, n (%) | 69 (23.7) | 18 (22.0) | .739 | 3 (0.6) | 1 (0.5) | .655 |
| Opium addiction, n (%) | 64 (22.0) | 17 (20.7) | .807 | 21 (4.5) | 15 (7.6) | .111 |
| BMI (kg/m²) | 24.99 ± 4.22 | 29.18 ± 3.86 | <.001 | 26.38 ± 4.86 | 30.81 ± 5.17 | <.001 |
| WC (cm) | 88.26 ± 11.94 | 101.29 ± 9.37 | <.001 | 83.30 ± 11.83 | 98.16 ± 10.47 | <.001 |
| WHR | 0.89 ± 0.07 | 0.96 ± 0.05 | <.001 | 0.82 ± 0.08 | 0.93 ± 0.08 | <.001 |
| SBP (mmHg) | 114.98 ± 16.01 | 127.99 ± 16.92 | <.001 | 108.98 ± 15.56 | 122.60 ± 17.43 | <.001 |
| DBP (mmHg) | 74.98 ± 9.39 | 82.38 ± 11.28 | <.001 | 71.79 ± 10.46 | 78.11 ± 9.46 | <.001 |
| FPG (mg/dl) | 90.86 ± 24.05 | 116.20 ± 41.39 | <.001 | 86.67 ± 18.34 | 120.40 ± 47.38 | <.001 |
| TG (mg/dl) | 127.03 ± 66.25 | 221.15 ± 163.26 | <.001 | 100.11 ± 43.90 | 184.71 ± 78.75 | <.001 |
| HDL (mg/dl) | 46.43 ± 10.55 | 37.87 ± 7.27 | <.001 | 52.97 ± 12.41 | 44.31 ± 9.57 | <.001 |
| LDL (mg/dl) | 109.31 ± 31.35 | 101.07 ± 37.57 | .077 | 109.29 ± 35.07 | 113.65 ± 40.42 | .188 |
| Cholesterol (mg/dl) | 181.08 ± 37.04 | 181.66 ± 43.56 | .912 | 182.07 ± 43.53 | 194.74 ± 45.56 | .001 |
| WBC (×10³/μL) | 6.76 ± 1.68 | 7.08 ± 1.68 | .131 | 6.30 ± 1.66 | 6.92 ± 1.52 | <.001 |
| Neutrophil (×10³/μL) | 3.46 ± 1.25 | 3.72 ± 1.38 | .119 | 3.35 ± 1.21 | 3.70 ± 1.12 | .001 |
| Lymphocyte (×10³/μL) | 2.48 ± 0.73 | 2.51 ± 0.73 | .793 | 2.24 ± 0.66 | 2.46 ± 0.68 | <.001 |
| Monocyte (×10³/μL) | 0.58 ± 0.17 | 0.61 ± 0.16 | .174 | 0.51 ± 0.14 | 0.54 ± 0.15 | .005 |
| RBC (×10⁶/μL) | 5.33 ± 0.54 | 5.35 ± 0.60 | .745 | 4.78 ± 0.47 | 4.78 ± 0.49 | .951 |
| HB (g/dl) | 15.19 ± 1.26 | 15.33 ± 1.28 | .378 | 13.24 ± 1.18 | 13.40 ± 1.38 | .177 |
| HCT (%) | 46.04 ± 3.51 | 45.87 ± 4.22 | .741 | 41.08 ± 3.57 | 41.31 ± 4.23 | .516 |
| PLT (×10³/μL) | 220.67 ± 49.95 | 215.24 ± 45.32 | .376 | 254.18 ± 62.34 | 255.26 ± 56.46 | .834 |
| MPV (fL) | 10.27 ± 0.87 | 10.22 ± 0.77 | .640 | 10.54 ± 0.94 | 10.47 ± 0.90 | .355 |
| RDW-SD | 42.97 ± 3.14 | 42.76 ± 3.24 | .602 | 42.94 ± 2.98 | 43.56 ± 3.10 | .017 |
| RDW-CV | 13.89 ± 1.35 | 13.94 ± 1.26 | .787 | 14.02 ± 1.34 | 14.16 ± 1.39 | .242 |
| NLR | 1.49 ± 0.68 | 1.59 ± 0.76 | .233 | 1.57 ± 0.62 | 1.61 ± 0.67 | .539 |
| PLR | 94.85 ± 31.90 | 91.59 ± 28.99 | .405 | 120.76 ± 40.62 | 110.52 ± 37.39 | .003 |
| PMR | 403.18 ± 128.29 | 367.38 ± 96.79 | .007 | 529.20 ± 169.01 | 499.46 ± 162.80 | .037 |
| MHR | 0.013 ± 0.005 | 0.0176 ± 0.01 | <.001 | 0.010 ± 0.004 | 0.013 ± 0.004 | <.001 |
| NHR | 0.08 ± 0.04 | 0.10 ± 0.04 | <.001 | 0.07 ± 0.03 | 0.09 ± 0.03 | <.001 |

In both males and females, in the participants with MetS, age, BMI, WC, WHR, systolic blood pressure (SBP), diastolic blood pressure (DBP), FPG, TG, monocyte to HDL ratio (MHR) and neutrophil to HDL ratio (NHR) were significantly higher compared with the participants without MetS. In females with MetS, WBC, red cell distribution width-standard deviation (RDW-SD), neutrophil, lymphocyte and monocyte counts were significantly higher than in females without MetS (Table 1). HDL and the platelet to monocyte ratio (PMR) were significantly lower in participants with MetS compared with those without MetS, in both males and females. In females with MetS, the platelet to lymphocyte ratio (PLR) was significantly lower than in those without MetS (Table 1). In males with MetS, smoking, opium addiction, WC, WHR, SBP, DBP, monocyte, RBC, HB, HCT, MHR and NHR were significantly higher than these parameters in females with MetS. In females with MetS, BMI, HDL, LDL, cholesterol, PLT, MPV, PLR and PMR were significantly higher than these parameters in males with MetS (Table 2).
**TABLE 2**

| Variable | MetS, Male (N = 82) | MetS, Female (N = 197) | p value |
| --- | --- | --- | --- |
| Age (years) | 51.90 ± 13.84 | 53.33 ± 11.72 | 0.381 |
| Smoking, n (%) | 18 (22.0) | 1 (0.5) | <0.001 |
| Opium addiction, n (%) | 17 (20.7) | 15 (7.6) | 0.002 |
| BMI (kg/m2) | 29.18 ± 3.86 | 30.81 ± 5.17 | 0.011 |
| WC (cm) | 101.29 ± 9.37 | 98.16 ± 10.47 | 0.020 |
| WHR | 0.96 ± 0.05 | 0.93 ± 0.08 | 0.001 |
| SBP (mmHg) | 127.99 ± 16.92 | 122.60 ± 17.43 | 0.019 |
| DBP (mmHg) | 82.38 ± 11.28 | 78.11 ± 9.46 | 0.001 |
| FPG (mg/dl) | 116.20 ± 41.39 | 120.40 ± 47.38 | 0.485 |
| TG (mg/dl) | 221.15 ± 163.26 | 184.71 ± 78.75 | 0.057 |
| HDL (mg/dl) | 37.87 ± 7.27 | 44.31 ± 9.57 | <0.001 |
| LDL (mg/dl) | 101.07 ± 37.57 | 113.65 ± 40.42 | 0.018 |
| Cholesterol (mg/dl) | 181.66 ± 43.56 | 194.74 ± 45.56 | 0.028 |
| WBC (×103/μL) | 7.08 ± 1.68 | 6.92 ± 1.52 | 0.434 |
| Neutrophil (×103/μL) | 3.72 ± 1.38 | 3.70 ± 1.12 | 0.935 |
| Lymphocyte (×103/μL) | 2.51 ± 0.73 | 2.46 ± 0.68 | 0.640 |
| Monocyte (×103/μL) | 0.61 ± 0.16 | 0.54 ± 0.15 | 0.001 |
| RBC (×106/μL) | 5.35 ± 0.60 | 4.78 ± 0.49 | <0.001 |
| HB (gr/dl) | 15.33 ± 1.28 | 13.40 ± 1.38 | <0.001 |
| HCT (%) | 45.87 ± 4.22 | 41.31 ± 4.23 | <0.001 |
| PLT (×103/μL) | 215.24 ± 45.32 | 255.26 ± 56.46 | <0.001 |
| MPV (fL) | 10.22 ± 0.77 | 10.47 ± 0.90 | 0.030 |
| RDW‐SD | 42.76 ± 3.24 | 43.56 ± 3.10 | 0.056 |
| RDW‐CV | 13.94 ± 1.26 | 14.16 ± 1.39 | 0.212 |
| NLR | 1.59 ± 0.76 | 1.61 ± 0.67 | 0.883 |
| PLR | 91.59 ± 28.99 | 110.52 ± 37.39 | <0.001 |
| PMR | 367.38 ± 96.79 | 499.46 ± 162.80 | <0.001 |
| MHR | 0.0176 ± 0.01 | 0.013 ± 0.004 | <0.001 |
| NHR | 0.10 ± 0.04 | 0.09 ± 0.03 | 0.002 |

As shown in Table 3, using the number of metabolic components as a measure of MetS severity, the WBC count, Neutrophil, Lymphocyte, Monocyte, RDW‐SD, red cell distribution width‐coefficient of variation (RDW‐CV), PLR, MHR and NHR were 
significantly correlated with the severity of MetS. The correlation was positive for all of these parameters except PLR. WBC was significantly correlated with all metabolic components except age (Table 3).

**TABLE 3**

| Variables | Age (years) | WC (cm) | FPG (mg/dl) | TG (mg/dl) | HDL (mg/dl) | SBP (mmHg) | DBP (mmHg) | Number of components |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| WBC (×103/μL) | −0.001 | 0.139** | 0.095** | 0.165** | −0.161** | 0.080* | 0.069* | 0.191 |
| Neutrophil (×103/μL) | −0.029 | 0.122** | 0.081** | 0.095** | −0.112** | 0.049 | 0.054 | 0.164** |
| Lymphocyte (×103/μL) | 0.022 | 0.087** | 0.069* | 0.188** | −0.138** | 0.074* | 0.047 | 0.150** |
| Monocyte (×103/μL) | 0.059 | 0.113** | 0.045 | 0.089** | −0.139** | 0.086** | 0.063* | 0.089** |
| RBC (×106/μL) | −0.027 | 0.097** | −0.021 | 0.132** | −0.128** | 0.154** | 0.176** | −0.029 |
| HB (gr/dl) | 0.074* | 0.130** | −0.001 | 0.178** | −0.142** | 0.150** | 0.161** | −0.026 |
| HCT (%) | 0.123** | 0.136** | −0.044 | 0.142** | −0.053 | 0.173** | 0.188** | −0.038 |
| PLT (×103/μL) | −0.099** | −0.035 | −0.026 | −0.005 | 0.117** | −0.015 | 0.025 | 0.034 |
| MPV (fL) | −0.037 | −0.031 | 0.059 | −0.085** | −0.038 | −0.075* | −0.092** | 0.016 |
| RDW‐SD | 0.280** | 0.152** | −0.046 | −0.026 | 0.088** | 0.080** | 0.057 | 0.067* |
| RDW‐CV | 0.037 | 0.074* | −0.017 | −0.025 | −0.033 | 0.060 | 0.051 | 0.106** |
| NLR | −0.025 | 0.047 | 0.030 | −0.037 | 0.002 | −0.007 | 0.020 | 0.025 |
| PLR | −0.077* | −0.106** | −0.073* | −0.158** | 0.195** | −0.074* | −0.022 | −0.107** |
| PMR | −0.101** | −0.126** | −0.059 | −0.070* | 0.204** | −0.087** | −0.034 | −0.054 |
| MHR | 0.022 | 0.231** | 0.077* | 0.300** | −0.649** | 0.080* | 0.053 | 0.335** |
| NHR | −0.038 | 0.224** | 0.101** | 0.268** | −0.579** | 0.058 | 0.054 | 0.371** |

The accuracy of WBC, Neutrophil, Lymphocyte and Monocyte in predicting MetS differed between males and females. 
The accuracy of WBC was higher for females (AUC = 0.632; p < .001; 95% confidence interval [CI]: 0.594–0.669) than for males (AUC = 0.564; p = .074; 95% CI: 0.512–0.615) (Table 4).

**TABLE 4**

| Parameter | WBC (×103/μL) | Neutrophil (×103/μL) | Lymphocyte (×103/μL) | Monocyte (×103/μL) |
| --- | --- | --- | --- | --- |
| **Male** | | | | |
| AUC (95% CI) | 0.564 (0.512–0.615) | 0.566 (0.514–0.617) | 0.503 (0.451–0.555) | 0.549 (0.497–0.600) |
| Optimal cut‐off point | 6.34 | 3.41 | 2.37 | 0.43 |
| Sensitivity (%) | 68.29 | 57.32 | 56.10 | 93.9 |
| Specificity (%) | 46.74 | 56.70 | 51.89 | 17.87 |
| Youden index | 0.150 | 0.140 | 0.080 | 0.118 |
| p value | .074 | .065 | .943 | .154 |
| **Female** | | | | |
| AUC (95% CI) | 0.632 (0.594–0.669) | 0.608 (0.569–0.645) | 0.609 (0.566–0.642) | 0.570 (0.532–0.609) |
| Optimal cut‐off point | 6.15 | 3.67 | 2.36 | 0.44 |
| Sensitivity (%) | 70.05 | 50.25 | 52.28 | 77.16 |
| Specificity (%) | 52.92 | 71.0 | 63.93 | 36.93 |
| Youden index | 0.230 | 0.212 | 0.162 | 0.141 |
| p value | <.001 | <.001 | <.001 | .003 |

For females, the cut‐off value of WBC was 6.1 (×103/μL), with a sensitivity of 70% and a specificity of 52.9%; for males, the cut‐off value was 6.3 (×103/μL), with a sensitivity of 68.2% and a specificity of 46.7%. Neutrophil count for males (AUC = 0.566) and WBC for females (AUC = 0.632) had better accuracy in predicting MetS than the other parameters (Table 4) (Figures 2 and 3). **FIGURE 2:** *Areas under the ROC curve (AUC) for WBC, Neutrophil, Lymphocyte and Monocyte in predicting metabolic syndrome for males.* **FIGURE 3:** *Areas under the ROC curve (AUC) for WBC, Neutrophil, Lymphocyte and Monocyte in predicting metabolic syndrome for females.* ## DISCUSSION We found that MetS affected the haematological parameters of the patients, including the WBC and its subcomponent cell counts, RDW, PLR, MHR and NHR. 
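The cut‐off selection reported in Table 4 maximizes Youden's index (J = sensitivity + specificity − 1) over candidate thresholds. The following is an illustrative sketch of that procedure with invented toy data, not the authors' code or data:

```python
# Hedged sketch: choosing an optimal cut-off for a continuous marker
# (e.g. WBC count) by maximizing Youden's index J = sens + spec - 1.
# All values below are invented for illustration.

def youden_cutoff(values, labels):
    """Return (best_cutoff, best_J) for predicting label==1 when value >= cutoff."""
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    best_cut, best_j = None, -1.0
    for cut in sorted(set(values)):
        sens = sum(v >= cut for v in pos) / len(pos)   # true-positive rate
        spec = sum(v < cut for v in neg) / len(neg)    # true-negative rate
        j = sens + spec - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Toy example: WBC counts (x10^3/uL) with MetS status (1 = MetS).
wbc  = [5.2, 5.8, 6.0, 6.2, 6.5, 7.0, 7.4, 8.1]
mets = [0,   0,   0,   1,   0,   1,   1,   1]
cut, j = youden_cutoff(wbc, mets)   # cut = 6.2, J = 0.75 on this toy data
```

In practice a dedicated library (e.g. scikit-learn's ROC utilities) would be used, but the exhaustive sweep above is the same idea at small scale.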
In our study, WBC and its subcomponent cell counts had a significant positive correlation with the severity of MetS, especially in females. Our results parallel those of previous studies, which reported significant differences in WBC and its subcomponent cells between participants with and without MetS. 11, 16, 19 Yang et al. reported that total leukocyte‐related parameters were elevated in individuals aged 60 years or above. 20 Ahmadzadeh et al. demonstrated that MetS components were significantly correlated with WBC and its subcomponent cell counts. 16 In the study by Hedayati et al. in western Iran, the mean WBC count in the MetS group was significantly higher than in the control group. 21 Consistent with our data, in a study by Chen et al., the WBC‐related parameters, in contrast to the platelet‐related parameters, showed significant changes in patients with MetS. 22 In a study of 100 healthy subjects and 200 patients with MetS, total leukocyte and neutrophil counts were significantly increased in all groups of MetS patients compared to the healthy group. 23 Insulin resistance and chronic inflammation are associated with metabolic syndrome through the synthesis of cytokines that increase the WBC and its subcomponent cell counts. 24 Lorenzo et al. observed an association between an increased risk of diabetes and elevated WBC, neutrophil and lymphocyte counts through an insulin resistance/sensitivity mechanism. 25 In addition, a relationship between higher WBC counts and higher BMI values has been observed in both sexes. 26 In this study, we found that among patients with MetS, women had a greater BMI and higher cholesterol, PLT and platelet‐related ratios, whereas men had a greater history of smoking and opium consumption and higher BP, HB, HCT, RBC, monocytes and monocyte‐related ratios. 
Consistent with our results, another study determined that the predominant features of MetS in women were abdominal obesity and an impaired lipid profile, whereas in men they were high BP and an impaired lipid profile. 27 In our study, RDW had a significant positive correlation with the severity of MetS, especially in females; however, this correlation was not observed for mean platelet volume (MPV). So far, few studies have been done in this field. Consistent with our data, Farah et al. indicated that both RDW and MPV increased as the severity of MetS increased. 28 Abdel‐Moneim et al. found higher levels of MPV in MetS patients. 23 Zhao et al. demonstrated that MPV was inversely related to MetS in women. 29 In another study, no significant difference in MPV between those with and without MetS was observed. 16 In our study, MHR and NHR had a significant positive correlation with the severity of MetS. A recent study demonstrated that NHR and the Lymphocyte to HDL ratio (LHR) were significantly correlated with the prevalence of MetS, and the correlation was more profound in females. 30 Another recent study showed that both MHR and NHR were significantly increased in patients with nascent MetS. 31 Considering that the monocyte count is an indicator of inflammatory conditions and atherosclerosis, 32, 33 some studies revealed that MHR is a suitable predictor of the development and severity of MetS and of cardiovascular events. 34, 35 In our results, the Neutrophil to Lymphocyte ratio (NLR) was not recognized as a predictive factor for MetS. Ryder et al. observed no association between NLR and obesity or insulin resistance. 36 Contrary to our results, two studies found that patients with MetS had a higher NLR. 23, 37 Liu et al. revealed that the risk of MetS increased with increasing NLR and mentioned NLR as a factor for predicting the development of MetS. 
38 In addition, this ratio has been mentioned as a predictive factor for diabetes in obese individuals. 39 PLR had a significant negative correlation with all metabolic components except DBP in this study. In another study, the PLR in patients with MetS was higher than in patients without MetS, and PLR had a significant positive correlation with C‐reactive protein (CRP) levels. 40 In the study by Abdel‐Moneim et al., the PLR was significantly higher in all patients with MetS than in healthy subjects. 23 Cut‐off points for WBC and its subcomponent cell counts can be used to determine the potential risk of developing MetS. In our study, the cut‐off value of WBC was 6.1 (×103/μL) for females and 6.3 (×103/μL) for males. Our results confirm previous findings: Pei et al. reported cut‐off points of 5.6 (×103/μL) for men and 5.8 (×103/μL) for women, 41 and De Oliveira et al. reported cut‐off points of 7.5 (×103/μL) for men and 5.6 (×103/μL) for women. 42 ## Limitations Our study had several limitations. First, this was a cross‐sectional study, so causative effects could not be analysed. Second, the sample size was small, and there were fewer males than females. Further, our study population came from Kerman, in southeastern Iran; we cannot generalize our results to the whole Iranian population. ## Future directions The measurement of haematological parameters is readily available in most parts of the world. Unfortunately, in public health policies these parameters have no place in the diagnosis and follow‐up of patients with MetS, which causes such patients to be missed and imposes substantial financial and social costs on the global health system. The results of our study can be an incentive to conduct prospective studies that may lead to the inclusion of haematological parameters in the diagnostic criteria of MetS. 
Measuring these haematological parameters is cost‐effective and convenient and facilitates the screening and follow‐up of patients suspected of having MetS. Prospective studies are required to explain the causal relationships between MetS and haematological parameters, confirm our data and evaluate the need to change the risk assessment criteria for MetS. ## CONCLUSION Higher levels of WBC and its subcomponent cell counts, RDW, MHR and NHR could predict an increased chance of developing MetS, regardless of gender differences. WBC also correlated with MetS components such as WC, FPG, TG, HDL, SBP and DBP; these parameters are easy to access in patients. Considering that no study has been done on this topic in the population of southeastern Iran, our findings provide additional evidence for using these markers for the early detection of MetS components, which would ultimately improve existing clinical practice in identifying and following MetS patients. ## AUTHOR CONTRIBUTIONS Mohammad Javad Najafzadeh: Conceptualization (equal); investigation (equal); writing – original draft (equal); writing – review and editing (equal). Amir Baniasad: Conceptualization (equal); formal analysis (equal); methodology (equal); writing – review and editing (equal). Reza Shahabinejad: Data curation (equal); investigation (equal); software (equal); writing – original draft (equal). Mahdieh Mashrooteh: Formal analysis (equal); investigation (equal); methodology (equal); software (equal). Hamid Najafipour: Conceptualization (equal); investigation (equal); methodology (equal); project administration (equal); writing – review and editing (equal). Mohammad Hossein Gozashti: Conceptualization (equal); investigation (equal); methodology (equal); project administration (equal); writing – original draft (equal); writing – review and editing (equal). ## FUNDING INFORMATION The Kerman University of Medical Sciences funded this research project (Reg. No. 95000008). 
## CONFLICT OF INTEREST The authors declare that they have no conflict of interest. ## ETHICAL APPROVAL The study protocol was reviewed and approved by the ethics committee of the Kerman University of Medical Sciences (ethics code: IR.KMU.REC.1395.775). Informed consent was obtained from all participants in the study. ## DATA AVAILABILITY STATEMENT The data supporting this study's findings are available from the corresponding author upon reasonable request.
# Levothyroxine therapy, calculated deiodinase activity and basal metabolic rate in obese or nonobese patients after total thyroidectomy for differentiated thyroid cancer: results of a retrospective observational study ## Abstract Because of controversial issues about the optimal T4 replacement dose in obese hypothyroid subjects, and given the great importance of thyroid hormones in energy homeostasis, glucose and lipid metabolism, body composition and resting energy expenditure (REE), we compared the correlations of the administered L‐T4 dose, thyroid hormone levels and TSH secretion with the estimated basal metabolic rate (BMR) and total deiodinase activity (GD) in obese and nonobese athyreotic subjects. We aimed to define individualized set points that might provide appropriate therapeutic and biochemical targets to be clinically tested in obese and nonobese patients. ### Introduction Therapy for hypothyroid obese patients is still under definition, since the thyroid‐stimulating hormone (TSH) level is a less reliable marker of euthyroidism than in nonobese patients. Indeed, TSH levels positively correlate with body mass index (BMI), and this increase may be a compensatory mechanism aimed at increasing energy expenditure in obese people. In contrast, the correlation of BMI with thyroid hormone levels is not completely clear, and conflicting results have been obtained in several studies. The L‐T4 replacement dose is more variable in obese hypothyroid patients than in nonobese patients, and a recent study indicated that the L‐T4 replacement dose is related to lean body mass in obese thyroidectomized patients. We aimed to study the correlations of the administered L‐T4 dose, thyroid hormone levels and TSH secretion with basal metabolic rate (BMR) and total calculated deiodinase activity (GD) in obese and nonobese athyreotic patients. We also looked for individualized L‐T4 replacement dose set points to be used in clinical practice. 
### Methods We retrospectively studied 160 athyreotic patients, 120 nonobese and 40 obese. GD was calculated with SPINA Thyr 4.2, the responsiveness of the hypothalamic/pituitary thyrotrope with Jostel's thyrotropin (TSH) index (TSHI) and BMR with the Mifflin‐St. Jeor formula. The interplay of GD and BMR with the L‐T4 dose, thyroid hormones and TSHI was also evaluated. ### Results The L‐T4 dose was an independent predictor of GD, and approximately 30% of athyreotic patients under L‐T4 therapy had a reduced GD. FT4 levels were higher in obese than in nonobese athyreotic patients and were negatively modulated by BMR. In these patients, a T4 to T3 shunt, in terms of TSHI suppression, was observed, suggesting defective hypothalamic‐pituitary T4 to T3 conversion and resistance to L‐T4 replacement therapy. ### Conclusions The L‐T4 dose is the most important predictor of GD, and BMR modulates T4 levels in obese athyreotic patients, who are resistant to L‐T4 replacement therapy. ## INTRODUCTION Levothyroxine (L‐T4) therapy has a long record in clinical use, with a defined pharmacological profile and safety in hypothyroidism management. Obesity and thyroid disorders are common in the general population and may be associated in both clinical and molecular aspects. This relationship has become epidemiologically relevant in the context of the significantly increased prevalence of obesity worldwide. However, treatment for obese patients with subclinical or overt hypothyroidism is still under definition regarding both the threshold and the modality (liquid L‐T4 vs. pills; L‐T4 monotherapy vs. liothyronine [L‐T3]/L‐T4 combinations). The prerequisite for treatment with L‐T4 is the presence of hypothyroidism, and the goal is the restoration of euthyroidism. Achievement of a thyroid‐stimulating hormone (TSH) value within the age‐adjusted euthyroid range is the accepted therapeutic target, as several studies indicate improvements in symptoms, quality of life and cardiovascular risk. 
1, 2, 3, 4 However, among euthyroid subjects, TSH levels usually correlate with body mass index (BMI), being higher in obese than in normal‐weight subjects. 5 TSH elevation in obese euthyroid people may be a compensatory mechanism of the pituitary‐thyroid axis aimed at increasing energy expenditure. 6, 7 At variance with TSH, the correlation between BMI and thyroid hormones (T4 and T3) is not clear, as several studies have obtained conflicting results. Some studies indicate that BMI is negatively related to FT4 and positively related to FT3. Another study, by contrast, indicated hyperactivation of the pituitary‐thyroid axis with increased FT4 levels in obese patients. 8, 9 Other studies describe a decreased FT4/FT3 ratio in obese patients. 5, 10, 11, 12 This adaptation of thyroid hormone homeostasis in obese subjects has been attributed to leptin and insulin actions. 3 The observation of higher TSH and lower FT4 in obese euthyroid people is in accordance with the increased L‐thyroxine replacement dose in hypothyroid obese patients. The L‐T4 replacement dose is approximately 1.6 μg/kg in hypothyroid patients without any functional thyroid tissue, while in obese patients, the correct T4 replacement dose is more variable. Recently, the American Thyroid Association (ATA) task force identified obesity as a morbid condition implying an increase in the L‐T4 replacement dose because of reduced thyroid hormone absorption. 2 This observation is reinforced by the evidence that in obese subjects, an acute overload of administered L‐T4 takes longer to achieve a peak plasma concentration than in nonobese people. 1 However, a recent study indicated that in obese thyroidectomized patients, the L‐T4 replacement dose is positively related to lean body mass. Indeed, the ideal body weight (IBW) should be preferred to the real body weight (RBW) for L‐T4 dose titration, because lean body mass is a better predictor of T4 requirement than fat mass. 
6, 7 Because of these controversial issues about the optimal T4 replacement dose in obese hypothyroid subjects, and given the great importance of thyroid hormones in energy homeostasis, glucose and lipid metabolism, body composition and resting energy expenditure (REE), 10, 12, 13, 14 we compared the correlations of the administered L‐T4 dose, thyroid hormone levels and TSH secretion with the estimated basal metabolic rate (BMR) and total deiodinase activity (GD) in obese and nonobese athyreotic subjects. Moreover, we aimed to define individualized set points that might provide appropriate therapeutic and biochemical targets to be clinically tested in obese and nonobese patients. ## Patients We retrospectively evaluated 1150 thyroidectomized patients referred to our outpatient thyroid clinic between 2010 and 2015 who had also undergone 131I ablation because of differentiated thyroid cancer (DTC). In all patients, thyroglobulin levels were between 0.01 and 0.5 ng/ml, and antithyroglobulin antibody (TgAb) was negative. In this cohort, devoid of functional thyroid tissue, all circulating T4 originated from levothyroxine replacement therapy. These patients obtain circulating T3 from the conversion of exogenous T4 and therefore represent an ideal model for studying the ability of peripheral tissues to generate the biologically active hormone. We excluded from the analysis patients with hypothalamic/pituitary, gastric, intestinal or neurological diseases and pregnant women (n = 72), as well as those taking combined T3/T4 thyroid replacement therapy and/or other drugs interfering with thyroid hormone homeostasis (n = 198). Patients with variations in the daily L‐T4 dose, body weight or thyroid hormone levels within 3 months before the start of the study were also excluded (n = 720). Finally, 160 athyreotic patients under L‐T4 therapy were included in the analysis (Figure 1). All patients were euthyroid on the basis of TSH, FT4 and FT3 levels within the normal range. 
**FIGURE 1:** *Flow chart of study patient selection* ## Phenotypic evaluation of the study patients Clinical records included a detailed history, a physical examination and a standardized questionnaire documenting sex, age, height, weight and BMI. BMI was calculated as weight in kilograms divided by the square of height in metres (kg/m2) and was considered a categorical variable according to the World Health Organization (WHO). Obesity was defined as BMI ≥ 30, which is an adequate indicator of obesity and is associated with increased body fat mass. In our study, 40 patients had a BMI ≥ 30, while 120 had a BMI < 30. ## Basal metabolic rate (BMR) evaluation We evaluated BMR by the Mifflin‐St. Jeor formula (MSTF). The MSTF equation is commonly used in the assessment of basal metabolism, particularly in obese patients. The MSTF was applied separately for females and males as follows: Females: BMR = 9.99 × weight (kg) + 6.25 × height (cm) − 4.92 × age (years) − 161. Males: BMR = 9.99 × weight (kg) + 6.25 × height (cm) − 4.92 × age (years) + 5. We studied the effect of the L‐T4 replacement dose on thyroid hormone homeostasis, estimated BMR and total deiodinase activity (GD) in obese and nonobese patients. Data were collected from patients after thyroidectomy and 131I administration, with a persistent euthyroid state under replacement therapy for approximately 3 months without any significant change in the administered L‐T4 dose, daily caloric intake or body weight. A subgroup of 45 patients maintaining the same replacement dose over the previous 6 months was also studied to better evaluate the interplay between the administered L‐T4 dose and total GD in the long term. ## Evaluation of stimulated deiodination (GD) GD, which reflects the maximum stimulated activity of deiodination, was calculated by SPINA Thyr 4.2 (Structure Parameter Inference Approach by Johannes W. 
Dietrich, Lab XU44, Bergmannsheil University Hospitals, Ruhr University of Bochum, D‐44789 Bochum, NRW, Germany), which is a mathematical tool for the integrated interpretation of laboratory results. SPINA allows the calculation of GD from TSH, FT4 and FT3 serum levels obtained from routine laboratory assays. The method is based upon mathematical/cybernetic modelling of processing structures. 15 In particular, the SPINA algorithm is based on an equilibrium analysis of a compartmental nonlinear model: $$\hat{G}_D = \frac{\beta_{31}\,(K_{M1} + [FT_4])\,(1 + K_{30}\,[TBG])\,[FT_3]}{\alpha_{31}\,[FT_4]},$$ where $\beta_{31}$ is the clearance exponent for T3, $K_{M1}$ is the dissociation constant of type 1 deiodinase, $K_{30}$ is the dissociation constant of T3 at thyroxine‐binding globulin (TBG), and $\alpha_{31}$ is the dilution factor for triiodothyronine. On the basis of several studies, normal values of calculated GD vary between 21 and 26 nmol/s. 15, 16 Hence, a GD < 21 nmol/s is considered low. ## Responsiveness of the hypothalamic/pituitary thyrotrope We also assessed the responsiveness of the hypothalamic/pituitary thyrotrope by Jostel's thyrotropin (TSH) index, $JTSHI = \ln([TSH]) + \beta\,[FT_4]$, and obtained a standardized TSH index, $TSHI = (JTSHI - 2.7)/0.676$, for statistical comparison. ## Laboratory measurements Serum TSH was assessed by an ultrasensitive enhanced chemiluminescence immunoassay (ECLIA). Serum hormones were measured by microparticle enzyme immunoassay (Abbott AxSYM‐MEIA) with interassay coefficients of variation of less than 10% over the analytical ranges of 1.7–46.0 pmol/L for FT3, 5.15–77.0 pmol/L for FT4 and 0.03–10.0 mU/L for TSH. The within‐run and between‐run precisions of the FT3, FT4 and TSH assays showed coefficients of variation <5%. Antithyroglobulin antibodies (TgAbs) were measured by an automated chemiluminescence assay system (AntiTg, Ready Pack). Thyroglobulin levels were measured with a second‐generation chemiluminescent Tg immunoassay (Tg Access; Beckman Coulter) with a functional sensitivity of 0.1 ng/ml. 
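The three calculations above (SPINA GD, Jostel's TSH index and the MSTF BMR) can be sketched in a few lines. This is our illustration, not the SPINA Thyr source; the structure parameter values are assumptions taken from the published SPINA model and should be verified against SPINA Thyr itself before any clinical use:

```python
# Hedged sketch: estimating total deiodinase activity GD, Jostel's TSH index
# and Mifflin-St. Jeor BMR from routine values. Constants below are assumed
# from the SPINA literature (alpha31, beta31, KM1, K30, [TBG]); verify them.
import math

ALPHA_31 = 0.026   # 1/L, dilution factor for T3 (assumed)
BETA_31 = 8.0e-6   # 1/s, clearance exponent for T3 (assumed)
K_M1 = 500e-9      # mol/L, dissociation constant of type 1 deiodinase (assumed)
K_30 = 2e9         # L/mol, dissociation constant of T3 at TBG (assumed)
TBG = 300e-9       # mol/L, assumed thyroxine-binding globulin concentration

def spina_gd(ft4_pmol_l, ft3_pmol_l):
    """Maximum stimulated deiodination GD in nmol/s (SPINA-style formula)."""
    ft4 = ft4_pmol_l * 1e-12   # pmol/L -> mol/L
    ft3 = ft3_pmol_l * 1e-12
    gd = BETA_31 * (K_M1 + ft4) * (1 + K_30 * TBG) * ft3 / (ALPHA_31 * ft4)
    return gd * 1e9            # mol/s -> nmol/s

def jostel_tshi(tsh_mu_l, ft4_pmol_l, beta=0.1345):
    """Jostel's TSH index; beta = 0.1345 L/pmol is the commonly cited value."""
    return math.log(tsh_mu_l) + beta * ft4_pmol_l

def standardized_tshi(jtshi):
    return (jtshi - 2.7) / 0.676

def mifflin_st_jeor(weight_kg, height_cm, age_y, female):
    """Basal metabolic rate (kcal/24 h) by the Mifflin-St. Jeor formula."""
    base = 9.99 * weight_kg + 6.25 * height_cm - 4.92 * age_y
    return base - 161 if female else base + 5

gd = spina_gd(ft4_pmol_l=16.5, ft3_pmol_l=4.5)      # ~25 nmol/s, in the 21-26 range
bmr = mifflin_st_jeor(73.1, 163.6, 44.6, female=True)
```

With typical euthyroid inputs the GD estimate lands inside the 21–26 nmol/s reference band quoted in the text, which is a useful sanity check on the assumed constants.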
## Statistical analysis Statistical analysis was performed using the SPSS package (IBM SPSS Statistics for Windows, Version 26.0; IBM Corp). For the descriptive analysis, continuous variables were expressed as the mean ± standard deviation (SD) or median (25th–75th percentile); categorical variables were expressed as numbers and percentages. Univariate analysis of variance (ANOVA) was performed to identify predictive variables significantly associated with the clinical outcome. The shape of the distribution of each variable was evaluated by visual inspection of population pyramid charts; for distributions of similar shapes, we reported the medians, and for distributions of different shapes, we reported average ranks. The Mann–Whitney U test was used to analyse continuous variables without a normal distribution. Categorical variables were analysed by the Chi‐square test or, if cells with expected counts below five were found, by Fisher's exact test. Complete and partial bivariate analysis was used to evaluate noncategorical variables, and Pearson's coefficient was computed. Binary logistic regression analysis was performed for the outcome variables. Covariates were selected on the basis of the results of the univariate analysis, and the final model was built using forced entry and a hierarchical method. The linearity of the continuous variables with respect to the logit of the dependent variable was assessed by the Box‐Tidwell procedure, and a Bonferroni correction was applied using all terms in the model to assess its statistical significance. Multicollinearity was excluded after checking the tolerance and variance inflation factor statistics and the proportion of the variance of each predictor's b value attributed to each eigenvalue. The ability of the model to discriminate between outcome categories was investigated in more detail by elaborating the ROC curve. 
This analysis was performed for the LT4 × week/BMR ratio versus deiodinase activity on the basis of the regression outputs. Youden's best cut‐off was also calculated, choosing the value that best balanced sensitivity and specificity for the studied variable. ## Univariate analysis The structure parameter inference approach (SPINA) revealed that GD was reduced in 50/160 (31.2%) of the thyroidectomized patients (Table 1).

**TABLE 1**

| Age (years) | 44.6 (13.9) |
| --- | --- |
| Sex (F/M) | 117/43 |
| Weight (kg) | 73.1 (18.3) |
| Height (cm) | 163.6 (12.2) |
| BMI (kg/m2) | 27.1 (6.1) |
| TSH (mU/L) | 1.6 (0.4–2.9) |
| FT4 (pmol/L) | 14.1 (11.6–21.9) |
| FT3 (pmol/L) | 4.1 (2.1–5.4) |
| FT3/FT4 ratio | 0.25 (0.05) |
| BMR (Kcal/24 h) | 1419.1 (265.5) |
| LT‐4 × week/BMR (μg/kcal) | 0.6 (0.4–1.4) |
| LT‐4 × week (μg) | 835.7 (238.7) |
| GD (nmol/s) | 24.1 (12.0–40.0) |
| GD < 21 nmol/s (n/%) | 50/31.2 |
| TSHI | 2.0 (0.0–3.9) |

Patients were divided into two groups according to normal (≥21 nmol/s) or low (<21 nmol/s) GD. Sex, age, BMI and BMR did not differ between the two groups (Table 2). Univariate analysis revealed that FT3 and the FT3/FT4 ratio were significantly reduced in patients with low GD compared to patients with normal GD (both p = .0001). In the low‐GD group, however, TSHI, FT4, the LT‐4 weekly cumulative dose (LT‐4 × week) and the ratio between the LT‐4 weekly cumulative dose and the basal metabolic rate (LT‐4 × week/BMR) were significantly increased (p = .0001–.01) (Table 2). 
**TABLE 2**

| Variable | GD < 21 nmol/s (n = 50) | GD ≥ 21 nmol/s (n = 110) | p |
| --- | --- | --- | --- |
| Age (years) | 42.9 (17.3) | 45.2 (12.1) | 0.6 |
| Sex (F/M) | 34/16 | 83/27 | 0.2 |
| Weight (kg) | 73.3 (20.2) | 73.1 (17.5) | 0.9 |
| Height (cm) | 162.9 (16.9) | 163.9 (9.5) | 0.7 |
| BMI (kg/m2) | 26.8 (6.7) | 27.1 (5.6) | 0.6 |
| TSH (mU/L) | 0.8 (0.1) | 1.0 (0.1) | 0.4 |
| FT4 (pmol/L) | 18.1 (2.0) | 14.2 (2.6) | 0.0001 |
| FT3 (pmol/L) | 3.7 (0.1) | 4.3 (0.0) | 0.0001 |
| FT3/FT4 ratio | 0.20 (0.05) | 0.30 (0.05) | 0.0001 |
| BMR (Kcal/24 h) | 1435.1 (40.3) | 1409.8 (24.7) | 0.4 |
| LT‐4 × week/BMR (μg/kcal) | 0.6 (0.2) | 0.5 (0.1) | 0.01 |
| LT‐4 × week (μg) | 910.2 (259.2) | 802.5 (222.3) | 0.006 |
| GD (nmol/s) | 18.3 (0.3) | 26.6 (0.3) | 0.0001 |
| TSHI | 2.2 (0.1) | 1.7 (0.1) | 0.004 |

## Binary logistic regression analysis and ROC curve Variables reaching statistical significance in the univariate analysis were then entered into binary logistic regression models. LT‐4 × week/BMR was independently and inversely related to GD [B = −3.88, Wald = 7.6, OR = 0.021 (95% confidence interval (CI): 0.001–0.329), p = .006], and FT3 levels were directly and independently related to GD [B = 2.81, Wald = 25.1, OR = 17.4 (95% CI: 5.6–53.4), p = .0001]. In contrast, BMR, BMI, body weight, TSH and FT4 were not independently related to GD. To evaluate the effect of LT4 × week/BMR on GD, we used a classic receiver operating characteristic (ROC) model, which was well validated by the area under the curve (AUC) = 0.81 ± 0.073 (95% CI: 0.66–0.95, p = .001). To define the cut‐off of the LT‐4 dose beyond which GD was reduced, we searched for the best cut‐off using Youden's statistic (YS). YS = 0.60 indicates that LT‐4 × week/BMR > 0.56 μg × week/kcal is a good predictor of suppressed GD, with sensitivity = 83% and specificity = 77% (e.g. 
a total of 144 mcg of LT‐4 daily dose reduces GD in patients with an estimated BMR of 1800 kcal/day).

## Linear regression, complete or partial bivariate analysis with calculation of Pearson coefficient

FT3 and FT4 were increased in obese patients compared with nonobese patients (p = .07 and p = .01, respectively), while GD and LT‐4 × week/BMR were similar in the two groups (Table 3).

**TABLE 3**

| Variable | Non‐obese (n = 120) | BMI ≥30 < 35 (n = 20) | BMI ≥35 (n = 20) | p |
| --- | --- | --- | --- | --- |
| Sex m/f | 28/92 | 8/12 | 7/13 | |
| Age (years) | 43.4 (14.5) | 46.7 (11.5) | 48.3 (11.1) | 0.1 |
| Weight (kg) | 65.5 (12.2) | 89.3 (10.8) | 101.6 (16.4) | 0.0 |
| Height (cm) | 163.1 (12.7) | 167.9 (10.5) | 160.9 (8.8) | 0.4 |
| BMI (weight/[height]2) | 24.3 (3.2) | 31.6 (1.3) | 39.7 (4.9) | 0.0 |
| TSH (mU/L) | 0.8 (0.05) | 1.0 (0.2) | 1.3 (0.2) | 0.6 |
| TSHI | 1.7 (0.8) | 2.1 (0.7) | 2.2 (0.2) | 0.1 |
| FT4 (pmol/L) | 15.6 (2.4) | 16.7 (3.2) | 17.7 (4.2) | 0.01 |
| FT3 (pmol/L) | 3.9 (0.6) | 4.1 (0.6) | 4.3 (0.5) | 0.07 |
| FT3/FT4 ratio | 0.25 (0.04) | 0.24 (0.05) | 0.24 (0.05) | 0.1 |
| BMR | 1336.8 (216.6) | 1633.4 (254.8) | 1688.1 (258.4) | 0.0 |
| GD (nmol/s) | 24.1 (4.8) | 23.7 (5.5) | 24.4 (6.1) | 0.2 |
| GD < 21 (nmol/s) n/% | 36/30 | 7/35 | 7/35 | 0.3 |
| LT‐4 × week/BMR | 0.6 (0.1) | 0.6 (0.1) | 0.6 (0.1) | 0.3 |
| LT‐4 × week (μg) | 779.2 (214.1) | 1006.9 (270.3) | 1002.0 (173.1) | 0.0 |

Partial bivariate analysis revealed that FT4 levels were positively related to BMI (p = .01) and negatively related to BMR after subtraction of the BMI effect (p = .02). Pituitary thyrotropic activity, evaluated by TSHI, was positively related to BMI (R = 0.13, p = .05) and to LT‐4 × week/BMR (R = 0.14, p = .03), and inversely related to GD (p = .0004) (Figure 2). FT4 levels were positively related to TSHI in both obese and nonobese patients.
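The Youden cut‑off search used in the ROC analysis above simply picks the threshold maximizing J = sensitivity + specificity − 1. A minimal pure‑Python sketch (illustrative data only; we assume higher marker values predict the condition, as with LT‑4 × week/BMR and suppressed GD):

```python
def youden_cutoff(values, labels):
    """Return (threshold, J) maximizing Youden's J = sensitivity + specificity - 1.

    labels: 1 for the condition (e.g. suppressed GD), 0 otherwise.
    Higher values are assumed to predict the condition (value > threshold -> positive).
    """
    best_j, best_cut = -1.0, None
    for cut in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v > cut and y == 1)
        fn = sum(1 for v, y in zip(values, labels) if v <= cut and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v <= cut and y == 0)
        fp = sum(1 for v, y in zip(values, labels) if v > cut and y == 0)
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j
```

With the sensitivity and specificity reported in the paper (83% and 77%), J = 0.83 + 0.77 − 1 = 0.60, which is the value quoted for the 0.56 mcg × week/kcal cut‑off.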
In obese patients, the FT4 to TSHI increment was 3.3 times greater than the increment in nonobese patients: 0.1% versus 0.03%, R² = .24, p = .001 versus R² = .04, p = .02 (BMI ≥ 30 vs. BMI < 30) (Figure 3).

**FIGURE 2:** *Linear correlation of SPINA GD (nmol/s) with TSHI*

**FIGURE 3:** *(A and B) Correlation between TSHI and FT4 in obese and non‐obese athyreotic patients*

In obese patients (n = 40), FT3 levels were inversely related to TSHI; a TSHI increment of 1 unit was related to an FT3 decrement of 0.095%: R² = .1, p = .045. In contrast, FT3 levels were not related to TSHI variations in nonobese patients: R² = .002, p = .58 (Figure 4).

**FIGURE 4:** *(A and B) Correlation between TSHI and FT3 in obese and non‐obese athyreotic patients*

These data confirm that the feedback sensitivity of the pituitary to thyroid hormones differs significantly between obese and nonobese patients.

## DISCUSSION

Several lines of evidence indicate that hypothyroid patients under levothyroxine replacement therapy may present impaired T3 production and a reduced T3/T4 ratio. 13 The T3 pool derived from intrathyroidal conversion is absent and fails to maintain normal FT3 levels. As a consequence, their peripheral tissues may be underexposed to circulating T3. Our previous data indicate that 29.6% of levothyroxine‐treated athyreotic patients have a reduced FT3/FT4 ratio, and this percentage may progressively increase with increasing replacement levothyroxine dose. 17 These changes may be due to an imbalance between central and peripheral deiodinase activity that may disrupt thyroid hormone homeostasis in this subset of hypothyroid patients. 12, 13, 14 In our study, we evaluated total deiodinase activity (GD) with the SPINA cybernetic model. 15, 16 We found that our athyreotic patients with impaired GD received a larger dose of LT‐4 and had increased FT4 and TSHI levels, while the FT3/FT4 ratio and FT3 levels were reduced (all p < .0001).
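The SPINA approach referenced above derives GD in closed form from a paired FT4/FT3 measurement. The sketch below is only illustrative: the structure parameters (`ALPHA31`, `BETA31`, `K_M1`) are assumed values, and published SPINA implementations may use total T3 and plasma‑protein corrections, so absolute outputs should not be compared with the values in Tables 1 and 2. The qualitative behaviour, however, matches the text: GD rises with FT3 at a given FT4, and falls as FT4 rises at a given FT3.

```python
# Assumed structure parameters for illustration only (not the authors' values).
ALPHA31 = 0.026   # dilution factor for T3, 1/L
BETA31 = 8.0e-6   # clearance exponent for T3, 1/s
K_M1 = 5.0e-7     # Michaelis constant of step-up deiodination, mol/L

def spina_gd(ft4_pmol_l, ft3_pmol_l):
    """Sketch of a SPINA-style estimate of total deiodinase activity, in nmol/s."""
    ft4 = ft4_pmol_l * 1e-12  # convert pmol/L -> mol/L
    ft3 = ft3_pmol_l * 1e-12
    gd_mol_s = BETA31 * (K_M1 + ft4) * ft3 / (ALPHA31 * ft4)
    return gd_mol_s * 1e9     # mol/s -> nmol/s
```

Because K_M1 dominates the numerator at physiological FT4, a high FT4 with a low FT3 (the pattern seen in the low‑GD group of Table 2) yields a low GD estimate.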
GD was reduced in 31.2% of study patients, confirming our previous report, since GD is well correlated with the FT3/FT4 ratio 17 (Table 2). To better evaluate the interplay between GD, BMR and the LT‐4 weekly cumulative dose (LT‐4 × week), we evaluated the ratio between LT‐4 × week and the basal metabolic rate (LT‐4 × week/BMR) calculated by the Mifflin‐St Jeor formula. Using this tool, which is widely used to evaluate BMR in obese patients, 18, 19 we demonstrated that total GD activity was independently and inversely related to LT‐4 × week/BMR. In line with this view, we analysed a subgroup of 45 patients with a stable LT‐4 dose, caloric intake and thyroid hormone levels for almost six months, and we found that an LT‐4 × week/BMR value of 0.56 mcg × week/kcal can predict the impairment of GD (<21 nmol/s) with good sensitivity and specificity (p = .01). To our knowledge, this is a new finding with a possible clinical implication for athyreotic patients receiving LT‐4 substitutive therapy. Interestingly, estimated BMR, BMI, age and sex were similar between the patients with normal or reduced GD, suggesting that LT‐4 dose and FT3 production are the two strongest independent predictors of GD. Cross‐sectional and longitudinal studies comparing post‐ and presurgical levels of LT‐4 show that higher LT‐4 doses are associated with the suppression of deiodinase activity. 16 FT4 and FT3 were higher in our obese (BMI ≥ 30) than in nonobese patients (BMI < 30) (p = .01 and p = .07), and TSHI was positively related to BMI and LT‐4 × week/BMR and inversely related to GD. However, GD and LT‐4 × week/BMR were not different between obese and nonobese patients, suggesting that BMI is not an independent determinant of GD. The pituitary thyrotropic activity, expressed by the relationship between TSHI and thyroid hormone levels, was different between nonobese and obese patients.
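The Mifflin‑St Jeor estimate and the LT‑4 × week/BMR ratio discussed above can be sketched as follows; the 0.56 mcg × week/kcal threshold is the cut‑off reported in this study, and the 144 mcg/day example reproduces the one given in the text:

```python
def bmr_mifflin_st_jeor(weight_kg, height_cm, age_y, sex):
    """Basal metabolic rate (kcal/24 h) by the Mifflin-St Jeor equation."""
    base = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_y
    return base + 5.0 if sex == "M" else base - 161.0

def lt4_week_per_bmr(daily_dose_mcg, bmr_kcal):
    """Weekly cumulative LT-4 dose normalized to BMR (mcg x week / kcal)."""
    return daily_dose_mcg * 7.0 / bmr_kcal

# Example from the text: 144 mcg/day with an estimated BMR of 1800 kcal/24 h
ratio = lt4_week_per_bmr(144.0, 1800.0)  # = 0.56, exactly at the reported cut-off
```

A patient above the 0.56 mcg × week/kcal ratio would, per the ROC analysis in this study, be at higher risk of suppressed GD.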
TSHI suppression was constantly exerted by increasing levels of FT4 in nonobese patients, while this suppression was significantly attenuated at higher levels of FT4 in obese patients, suggesting increased hypothalamic–pituitary resistance in response to increased T4 levels. The increment of FT4 for each unit of TSHI increment was significantly higher in obese patients than in nonobese patients (p = .04) (Figure 3). In contrast with the FT4 results, increasing levels of FT3 suppressed TSHI in obese patients, while FT3 was unrelated to TSHI variations in nonobese patients (Figure 4). This T4 to T3 shunt, in terms of TSHI suppression observed in obese patients, suggests defective hypothalamic–pituitary T4 to T3 conversion. Moreover, FT4 levels were positively related to BMI as well as to T4 dose, but only partially and inversely related to BMR when the BMI effect was subtracted (Pearson, p = .01). Considering that FT4 levels in athyreotic patients depend entirely on the absorbed LT‐4 dose and on the extent of T4 degradation, 17 this finding unravels a role of BMR in the modulation of FT4 bioavailability in both nonobese and obese patients, particularly those with greater lean body mass, which leads to increased BMR. 6, 7, 8 Unlike some recent studies, 20 we did not find a statistically significant correlation of GD with BMR. However, unlike those studies, we evaluated patients rendered athyreotic by total thyroidectomy and 131I ablation, which might increase the severity of suppression of the feedback loop and the ability to relay type 1 and type 2 allostatic load to T3 production. Moreover, we did not evaluate fat‐free mass and lean body mass separately.
Under normal conditions, thyroid hormones and TSH are inversely correlated, while in patients with resistance to thyroid hormone, higher thyroid hormone levels correspond to high TSH levels, reflecting a possible condition of resistance to FT4, as in obese patients. 9, 10, 21, 22 One study demonstrated that deiodinase ubiquitination is an important factor in restoring euthyroidism. Indeed, the ubiquitin proteasome system in the hypothalamus of obese mice fails to maintain adequate function. Hence, a defective ubiquitin proteasome system, resulting in deiodinase imbalance, might play a major role in regulating the response to thyroid hormones in obese subjects. 9, 21, 22, 23 Thyroid hormone action is modulated by the hypothalamic–pituitary–thyroid axis, 6 and cell membrane transport, tissue deiodination and degradation, and thyroid hormone metabolism in the liver may play an important role. 9, 22, 23, 24 Exogenous substrates are metabolized in the liver by enzymes that modify and/or conjugate their functional groups with endogenous substrates, increasing their solubility so that they can be readily eliminated. Approximately half of obese subjects display several abnormalities in liver enzymatic activity due to steatosis. 25, 26, 27 In particular, increasing BMI and thyroid hormone receptor β are, respectively, directly and inversely correlated with the different stages of nonalcoholic fatty liver disease (NAFLD), 28, 29 which, in turn, is related to decreased multidrug resistance protein (MRP2) activity in the liver. This condition is associated with alterations in the expression and function of enzymes and transporters, resulting in altered glucuronoconjugation of thyroid hormones. 23 However, our study is descriptive and does not allow any direct evaluation of mechanistic insights related to T4 activation, degradation and stability.

## CONCLUSIONS

Approximately one‐third of athyreotic patients under LT‐4 replacement therapy have reduced GD.
GD activity is inversely and independently related to LT‐4 dose and FT3 levels. We found that an LT‐4 weekly cumulative dose of 0.56 mcg/kcal of BMR was an independent predictor of reduced GD, while sex, age, BMI and BMR were not. FT4 levels are higher in athyreotic obese patients, who therefore appear more resistant to LT‐4 replacement therapy. Indeed, FT4 is positively related to BMI and inversely related to BMR, which, in turn, negatively modulates the FT4 increment, especially in patients with elevated lean body mass. Other metabolic pathways, both central and peripheral, might be involved in FT4 and FT3 degradation.

## AUTHOR CONTRIBUTIONS

Pasqualino Malandrino: Data curation (equal); validation (equal). Marco Russo: Data curation (equal); validation (equal). Dario Tumino: Data curation (equal); validation (equal); visualization (equal). Tommaso Piticchio: Data curation (equal); formal analysis (equal); visualization (equal). Adriano Naselli: Data curation (equal); formal analysis (equal); software (lead); supervision (equal); writing – review and editing (equal). Valentina Rapicavoli: Data curation (equal); resources (equal). Antonino Belfiore: Conceptualization (equal); funding acquisition (equal); methodology (equal); project administration (equal); resources (equal); supervision (equal); validation (equal); visualization (equal); writing – review and editing (equal). Francesco Frasca: Conceptualization (equal); funding acquisition (equal); investigation (lead); methodology (equal); project administration (equal); resources (equal); supervision (equal); validation (equal); visualization (equal); writing – review and editing (equal). Rosario Le Moli: Conceptualization (equal); data curation (equal); formal analysis (equal); investigation (equal); methodology (equal); project administration (equal); software (equal); supervision (equal); validation (equal); visualization (equal); writing – original draft (equal); writing – review and editing (equal).
## CONFLICT OF INTEREST

The authors declare no conflict of interest.

## INSTITUTIONAL REVIEW BOARD STATEMENT

The studies involving human participants were reviewed and approved by the Ethics Committee of Garibaldi‐Nesima Hospital, Catania.

## INFORMED CONSENT

Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.

## DATA AVAILABILITY STATEMENT

The data presented in this study are available on request from the corresponding author.
# Association between Pro‐oxidant‐Antioxidant balance and high‐sensitivity C‐reactive protein in type 2 diabetes mellitus: A Study on Postmenopausal Women

## Abstract

Serum PAB, hs‐CRP concentration, and lipid profile were significantly different between postmenopausal women with and without diabetes mellitus. These differences may contribute to the development of coronary complications.

### Introduction

Oxidative stress, a predictive marker for cardiovascular and metabolic diseases, can be measured through the pro‐oxidant–antioxidant balance (PAB). The present study aimed to evaluate PAB and its association with high‐sensitivity C‐reactive protein (hs‐CRP) in the serum of postmenopausal women with diabetes mellitus.

### Methods

In this case–control study, 99 postmenopausal women with diabetes mellitus and 100 healthy postmenopausal women without diabetes mellitus were recruited. Serum PAB values, hs‐CRP, lipid profile, insulin, and vitamin D levels were measured. Moreover, insulin resistance (HOMA‐IR), HOMA‐β, QUICKI, waist circumference (WC), waist‐to‐hip ratio (WHR), waist‐to‐height ratio (WHtR), and body mass index (BMI) were calculated.

### Results

Serum PAB, hs‐CRP, insulin resistance (HOMA‐IR), low‐density lipoprotein cholesterol (LDL‐C), and triglyceride (TG) levels were significantly higher, while HOMA‐β, QUICKI, and high‐density lipoprotein cholesterol (HDL‐C) were significantly lower, in the postmenopausal women with diabetes mellitus; there was no significant difference in total cholesterol (TC), serum insulin, WC, WHR, WHtR, and vitamin D levels between the groups. Pearson correlation analysis showed that insulin levels were directly, and HDL‐C levels inversely, correlated with serum PAB. Also, there was a significant direct relationship between LDL‐C and insulin levels and hs‐CRP. There was no meaningful relationship between vitamin D levels and the other assessed parameters.
Backward logistic regression showed a positive relationship between diabetes mellitus and serum PAB and an inverse relationship with serum HDL‐C levels.

### Conclusions

Serum PAB, hs‐CRP concentration, and lipid profile were significantly different between postmenopausal women with and without diabetes mellitus. These differences may contribute to the development of coronary complications.

## INTRODUCTION

An imbalance between the production of oxidants and their scavengers leads to oxidative stress (OS). OS may also stimulate the production of inflammatory factors, such as high‐sensitivity C‐reactive protein (hs‐CRP). Hs‐CRP is an inflammatory marker induced by cytokines, especially interleukin‐6 (IL‐6). OS and hs‐CRP are predictive markers of cardiovascular disease (CVD) and metabolic diseases, including type II diabetes. 1, 2, 3 Type II diabetes mellitus (T2DM) is a severe, multifactorial metabolic disease that affects women more than men in many countries. T2DM increases CVD risk 2–3‐fold, leading to a higher mortality rate than in non‐diabetic people. 4 Moreover, OS increases in menopausal women, in association with the loss of ovarian follicular function and oestrogen (E2) production, because E2 has antioxidant activity. 5 After menopause, the production of antioxidants is reduced, and OS increases. 6, 7 Thus, menopause may be a risk factor for OS, CVD, osteoporosis, and diabetes. Although OS and inflammation are well established in postmenopausal women, there are limited studies on the pro‐oxidant–antioxidant balance (PAB), CRP levels and their association with insulin resistance in diabetic postmenopausal women. 1, 8, 9 We sought to assess serum PAB values using a modified PAB assay that measures the pro‐oxidant burden and antioxidant capacity. This study also evaluated hs‐CRP and whether serum PAB values are associated with hs‐CRP in diabetic postmenopausal women.
## Study groups

This case–control study was carried out on 99 postmenopausal women who had recently been diagnosed with type II diabetes only and attended the Women's Health Research Center in Gorgan, Iran. The control group included 100 healthy postmenopausal women without diabetes, age‐matched to the patient group and recruited between January 2017 and June 2018 during routine check‐ups. Clinical history and other relevant data were collected from all participants. They were excluded if they had taken vitamin supplements, hormones, anti‐inflammatory drugs, or fish oil capsules. Moreover, smokers and pregnant subjects were excluded from the study. Those suffering from a myocardial infarction (MI), acute infection or any other acute illness were also excluded. One hundred and ninety‐nine subjects met the inclusion/exclusion criteria. They were informed about the study protocol, written consent was obtained from each participant, and the research was approved by the Mashhad University of Medical Sciences Ethics Committee (No. IR.MUMS.REC.1399.533).

## Anthropometric parameters and blood collection

After overnight fasting, 5 ml of venous blood was drawn into EDTA and plain tubes and centrifuged at 2500 rpm for 15 min at room temperature; serum was aliquoted into several microtubes and stored at −70°C until analysis. Furthermore, body weight, height, waist circumference (WC), and hip circumference (HC) were measured to calculate the waist‐to‐hip ratio (WHR), waist‐to‐height ratio (WHtR), and body mass index (BMI, kg/m2).

## Biochemical analysis processing

Fasting glucose and lipid profile indices, including total cholesterol (TC), triglycerides (TG), and HDL‐C, were measured by enzymatic methods and commercial kits using the BT‐3000 Auto‐analyser (Biotechnica). Moreover, LDL‐C was estimated indirectly with the Friedewald formula.
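The Friedewald estimate mentioned above is a simple closed‑form calculation (all concentrations in mg/dL; the formula is conventionally considered invalid when TG ≥ 400 mg/dL):

```python
def friedewald_ldl(total_chol, hdl, tg):
    """Estimate LDL-C (mg/dL) as TC - HDL-C - TG/5, where TG/5 approximates VLDL-C."""
    if tg >= 400:
        raise ValueError("Friedewald formula is not valid for TG >= 400 mg/dL")
    return total_chol - hdl - tg / 5.0
```

For example, TC = 200, HDL‑C = 50 and TG = 150 mg/dL gives an estimated LDL‑C of 120 mg/dL.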
The levels of insulin were assessed using commercial radioimmunoassay kits from the Immuno Nuclear Corporation (Stillwater). Insulin resistance was calculated using the HOMA equation: HOMA‐IR = [fasting insulin (μIU/ml) × fasting glucose (mmol/L)]/22.5. Also, the homeostasis model assessment of β‐cell function (HOMA‐β) and the quantitative insulin sensitivity check index (QUICKI) were used to assess β‐cell function and insulin sensitivity, respectively, as follows: HOMA‐β = (fasting plasma insulin [μU/ml] × 20)/(fasting blood glucose [mmol/L] − 3.5) and QUICKI = 1/(log fasting blood glucose [mmol/L] + log fasting plasma insulin [μU/ml]). Furthermore, serum 25‐hydroxyvitamin D [25(OH)D] levels were assessed using a commercial ELISA kit (25‐Hydroxyvitamin D ELISA kit; Immuno Diagnostic Systems).

## Measurements of hs‐CRP

The PEG (polyethylene glycol)‐enhanced immunoturbidimetry method and commercially available kits on an Alcyon® analyser (Abbott) were used to measure hs‐CRP levels.

## Assessment of PAB

Serum PAB values were measured in all subjects as previously described by Alamdari et al. 10 In the first step, horseradish peroxidase enzyme and chloramine‐T were added as oxidizing agents to 3,3′,5,5′‐tetramethylbenzidine (TMB). In this redox assay, TMB is either oxidized to a coloured cation (by oxidants) or reduced to a colourless compound (by antioxidants). For the standard solutions, various proportions (0%–100%) of 250 μM hydrogen peroxide (as an oxidizing substance) were mixed with 3 mM uric acid in 10 mM NaOH (as an antioxidant). The absorbance of 10 μl samples was measured with an enzyme‐linked immunosorbent assay (ELISA) reader at 450 nm against a 630 nm reference, and PAB values were expressed in arbitrary (Hamidi–Koliakos [HK]) units.

## Statistical methods

The normality of the data was assessed by the Kolmogorov–Smirnov test. The mean and SD (for normally distributed variables) and the median and interquartile range (IQR) (for non‐normally distributed variables) were used to describe the study variables.
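The three insulin indices defined above can be computed directly from their formulas (fasting insulin in μU/ml, fasting glucose in mmol/L; base‑10 logarithms are assumed for QUICKI, as is conventional):

```python
import math

def homa_ir(insulin_uU_ml, glucose_mmol_l):
    """HOMA-IR: insulin resistance index."""
    return insulin_uU_ml * glucose_mmol_l / 22.5

def homa_beta(insulin_uU_ml, glucose_mmol_l):
    """HOMA-beta: beta-cell function index (glucose must exceed 3.5 mmol/L)."""
    return insulin_uU_ml * 20.0 / (glucose_mmol_l - 3.5)

def quicki(insulin_uU_ml, glucose_mmol_l):
    """QUICKI: insulin sensitivity index (base-10 logs)."""
    return 1.0 / (math.log10(glucose_mmol_l) + math.log10(insulin_uU_ml))
```

For example, a fasting insulin of 10 μU/ml with a fasting glucose of 5 mmol/L gives HOMA‑IR ≈ 2.22, HOMA‑β ≈ 133.3 and QUICKI ≈ 0.589.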
The independent Student's t‐test (for normally distributed variables) was used to compare the means of study variables between the case and control groups. Logistic regression was used to determine the variables related to diabetes, including age, BMI, PAB, systolic blood pressure (SYSp), diastolic blood pressure (DIAp), glucose (Glc), insulin, insulin resistance (InsulinR), TC, LDL‐C, HDL‐C, TG, hs‐CRP and vitamin D. Following the Hosmer–Lemeshow approach, simple logistic regression was first used to screen the relationship between each study variable and diabetes. The variables with p < .2 were then added to the final model and analysed using multiple logistic regression. We used SPSS for Windows (version 18; SPSS Inc). A p‐value less than .05 was considered statistically significant.

## Participants' characteristics and demographic findings

All data showed a normal distribution. Demographic data, including age, BMI, SYSp and DIAp, were not significantly different between the two groups. Except for serum TC, insulin, vitamin D, WC, WHR and WHtR, the laboratory findings in diabetic subjects differed significantly from those in non‐diabetic subjects (p < .05). Table 1 shows the features of the two groups.
**TABLE 1**

| Variables | Diabetic (n = 99) | Control (n = 100) | p‐value |
| --- | --- | --- | --- |
| Age (y) | 65.33 ± 5.34 | 61.20 ± 5.99 | .283 |
| BMI (kg/m2) | 26.6 ± 2.0 | 26.2 ± 2.4 | .253 |
| SYSp (mmHg) | 13.03 ± 1.16 | 13.22 ± 1.20 | .951 |
| DIAp (mmHg) | 7.49 ± 0.66 | 7.53 ± 0.75 | .729 |
| Glc (mmol/L) | 198.72 ± 69.78 | 94.87 ± 5.68 | <.001* |
| TC (mg/dl) | 152.56 ± 10.77 | 150.19 ± 10.68 | .122 |
| LDL (mg/dl) | 144.72 ± 33.27 | 131.70 ± 31.43 | .005* |
| HDL (mg/dl) | 46.03 ± 9.03 | 49.07 ± 10.10 | .026* |
| TG (mg/dl) | 166.19 ± 37.01 | 155.59 ± 26.76 | .022* |
| Vit D (ng/ml) | 19.29 ± 10.58 | 19.63 ± 8.12 | .798 |
| PAB (H.K) | 0.40 ± 0.29 | 0.22 ± 0.13 | <.001* |
| hs‐CRP (mg/dL) | 5.11 ± 6.03 | 2.96 ± 3.07 | .002* |
| Insulin R** (μU/mL × mmol/L) | 4.10 ± 4.22 | 2.63 ± 2.52 | .003* |
| HOMA‐β (%) | 1.09 ± 0.89 | 1.85 ± 1.38 | <.001* |
| QUICKI | 0.32 ± 0.03 | 0.36 ± 0.04 | <.001* |
| Insulin (μU/ml) | 9.33 ± 6.31 | 8.45 ± 6.48 | .335 |
| WC (cm) | 95.49 ± 7.19 | 94.31 ± 10.3 | .352 |
| HC (cm) | 102.54 ± 7.18 | 101.64 ± 5.88 | .337 |
| WHR | 0.93 ± 0.07 | 0.93 ± 0.11 | .767 |
| WHtR | 0.56 ± 0.04 | 0.55 ± 0.06 | .717 |

## PAB values, hs‐CRP concentration and insulin resistance among postmenopausal women

Serum PAB levels in the diabetic subjects were significantly higher than in the control group (p < .001) (Table 1). Also, serum hs‐CRP concentrations were statistically different between the two groups (p = .002) (Table 1). Unsurprisingly, in diabetic women, there was a statistically significant difference in insulin resistance, HOMA‐β and QUICKI compared to non‐diabetic women (all p < .05), whereas no considerable difference was demonstrated between diabetic patients and healthy participants in serum insulin concentrations (p = .335).
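The group comparisons in Table 1 rely on the independent Student's t‑test described in the methods. A minimal illustration of the test statistic itself (pooled‑variance form, pure Python; p‑values would then come from the t distribution with n1 + n2 − 2 degrees of freedom):

```python
import math

def students_t(sample_a, sample_b):
    """Pooled-variance two-sample t statistic and its degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    # Pooled variance assumes equal variances in the two groups.
    sp2 = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    t = (mean_a - mean_b) / math.sqrt(sp2 * (1.0 / na + 1.0 / nb))
    return t, na + nb - 2
```

In practice a library routine (e.g. a statistics package's independent‑samples t‑test with equal variances) would be used; the sketch only makes the computation behind Table 1's p‑values explicit.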
## The relationship between serum PAB values, BMI, and hs‐CRP concentrations and other laboratory parameters

As shown in Table 2, Pearson correlation coefficient analysis was performed to evaluate the correlation between serum PAB values, BMI, hs‐CRP concentrations and the other laboratory parameters. Scatter plots showed a positive unadjusted association between serum PAB values and hs‐CRP levels (r = .258, p = .010) (Figure 1). We did not find any significant correlation between PAB values and insulin resistance (r = .095, p = .347) (Figure 2). Moreover, serum PAB and hs‐CRP levels were positively correlated with serum insulin (r = .212, p = .035 and r = .211, p = .037, respectively). Among the other study factors, a significant positive association was observed between serum PAB values and LDL‐C levels (r = .209, p = .038), together with a negative correlation with HDL‐C levels (r = −.224, p = .026). Moreover, comparison of BMI with the other values showed a significant correlation between BMI and TG levels (r = .207, p = .042). In addition, we did not find any association between vitamin D levels and the other laboratory parameters listed in this study.

## Multiple logistic regressions

Logistic regression with the backward approach showed that InsulinR (OR: 1.16, p = .012), cholesterol (OR: 1.033, p = .047) and LDL‐C (OR: 1.017, p = .002) levels, as well as PAB values (OR: 174.89, p < .001), had a positive association with diabetes mellitus compared to non‐diabetic women (Table 3). Moreover, these results showed that diabetes had an inverse association with HDL‐C (OR: 0.932, p < .001).
**TABLE 3**

| Model | Variable | OR | 95% CI | p‐value |
| --- | --- | --- | --- | --- |
| Multiple logistic regression (entered approach, pseudo R² = .396) | InsulinR | 1.151 | 1.019–1.300 | .024 |
| Multiple logistic regression (entered approach, pseudo R² = .396) | Cholesterol | 1.032 | 0.998–1.066 | .063 |
| Multiple logistic regression (entered approach, pseudo R² = .396) | LDL‐C | 1.015 | 1.004–1.026 | .007 |
| Multiple logistic regression (entered approach, pseudo R² = .396) | HDL‐C | 0.935 | 0.900–0.973 | .001 |
| Multiple logistic regression (entered approach, pseudo R² = .396) | TG | 1.010 | 0.999–1.022 | .073 |
| Multiple logistic regression (entered approach, pseudo R² = .396) | Vit D | 0.997 | 0.962–1.033 | .877 |
| Multiple logistic regression (entered approach, pseudo R² = .396) | PAB | 140.451 | 16.426–1200.97 | <.001 |
| Multiple logistic regression (entered approach, pseudo R² = .396) | hs‐CRP | 1.064 | 0.971–1.166 | .186 |
| Multiple logistic regression (backward approach, pseudo R² = .373) | Insulin R | 1.165 | 1.034–1.313 | .012 |
| Multiple logistic regression (backward approach, pseudo R² = .373) | Cholesterol | 1.033 | 1.01–1.067 | .047 |
| Multiple logistic regression (backward approach, pseudo R² = .373) | LDL‐C | 1.017 | 1.006–1.028 | .002 |
| Multiple logistic regression (backward approach, pseudo R² = .373) | HDL‐C | 0.932 | 0.897–0.968 | <.001 |
| Multiple logistic regression (backward approach, pseudo R² = .373) | PAB | 174.893 | 21.563–1418.518 | <.001 |

## DISCUSSION

To our knowledge, this is the first case–control study to report PAB values and investigate the relationship between hs‐CRP levels and PAB values in postmenopausal women with and without diabetes mellitus. The main finding of the present study was the elevation of serum PAB and hs‐CRP in diabetic postmenopausal women compared to non‐diabetic cases.
This finding is in accordance with earlier studies demonstrating the presence of systemic inflammation in diabetes. The increased level of OS is significantly associated with metabolic parameters in diabetic patients. 11, 12 OS can be induced by inflammation 4, 13; for example, higher concentrations of interleukin‐6 are an important stimulant for the production of hs‐CRP, 14 and inflammation can induce the production of free radicals. 15 The present study showed that serum hs‐CRP levels were positively associated with serum PAB values in diabetic women. Moreover, earlier reports support the presence of high OS and hs‐CRP levels in stroke, cardiovascular and beta‐thalassemia patients. 16, 17 There is strong evidence of a correlation between inflammation and OS, because both factors contribute to the pathogenesis of diabetes. 18 Moreover, diabetic postmenopausal women also had higher levels of blood glucose and a higher HOMA‐IR index. Consistent with previous studies, dysregulated lipid metabolism has been reported in diabetics, which could be attributed to increased lipolysis due to impaired insulin function in adipose tissue. In addition, the accumulation of free fatty acids in the liver leads to high hepatic synthesis of TGs and results in hypertriglyceridemia. 11, 19 In this study, as shown by Barrett‐Connor et al., 20 no difference in total cholesterol was observed between diabetic and non‐diabetic subjects. We did not find any significant association between BMI and serum hs‐CRP, glucose, TG, and LDL‐C levels. These results were inconsistent with those of Yang et al. 21 The reason may lie in the menopausal status of our subjects and the accompanying changes in the oestrogen hormone and its function in the liver. Moreover, parallel to our report, earlier reports have suggested that OS plays a major role in developing insulin resistance.
22, 23 Consistent with many studies, 23, 24 we can suggest that diabetic women have significantly altered lipid profiles compared with healthy postmenopausal subjects. Contrary to our work, many studies have reported that increased BMI values were strongly associated with hs‐CRP and OS levels. 25 We suggest that, independent of BMI, OS may also be an essential determinant of hs‐CRP levels in diabetic people. Therefore, the link between OS and hs‐CRP levels may involve pathways unrelated to BMI. In line with the study by Goodarzi et al., 7 there was no significant difference in BMI between the two groups. Moreover, consistent with Zaman et al., the patient and control groups were overweight but not obese. 26 Overweight women are not necessarily diabetic, and diabetes mellitus is not the only reason for the BMI increase in overweight type 2 diabetics; other factors may be involved. In addition, in line with our study, many studies have shown that some people with diabetes have a low, and some a very low, BMI. 26, 27 On the contrary, unlike some studies, 28 our study found that diabetes mellitus in our patients was not necessarily insulin‐dependent. Therefore, it can be concluded that in people with type 2 diabetes, other factors may play a role in the incidence of diabetes, and insulin levels in people with diabetes may vary without differing significantly from those of healthy subjects. In contrast to previous literature, 29, 30 our findings demonstrated a positive relationship between serum hs‐CRP and insulin levels, although inflammatory markers are reported to decrease insulin secretion and signalling in peripheral tissues. Moreover, interleukin‐6 decreases insulin signalling in the liver. 31 In the present study, we found an inverse correlation between PAB values and HDL‐C levels, in line with Cagnacci et al., 32 since oxidants can be reduced by the antioxidant enzyme paraoxonase carried by HDL‐C lipoproteins.
33, 34 Moreover, we found a significant relationship between TG levels and BMI. This finding suggests that high TG can contribute to obesity and ultimately increase BMI in diabetic postmenopausal women. Besides, in contrast to the Cardiovascular Health Study and the research by Mendall et al., surprisingly, no relationship was found between hs‐CRP levels and BMI in our women. Given this discrepancy with prior investigations, we think that diabetes in postmenopausal women may account for these outcomes. Our finding was in agreement with that of Kahn et al., 35, 36 indicating that diabetic postmenopausal women were characterized by insulin resistance. Moreover, it has been noted that insulin has a significantly negative relationship with higher hs‐CRP levels and PAB values. However, in Table 3, PAB values showed a positive correlation with LDL‐C levels and an inverse association with HDL‐C levels. The evidence supporting these results is that HDL cholesterol is the major lipoprotein carrier of antioxidant enzymes, whereas LDL is the main factor correlated with oxidative markers. Our study had a few limitations. The present work focused only on PAB values; however, several other factors, including sex hormones, can affect these biochemical parameters in OS. Another limitation was the small sample size.

## CONCLUSIONS

We found significantly higher PAB values in diabetic postmenopausal women. Moreover, we demonstrated that increased hs‐CRP concentrations are strongly associated with PAB values, a reliable OS marker. This finding was independent of BMI and insulin resistance in diabetic postmenopausal women. Measurement of PAB, hs‐CRP levels and other biochemical parameters may provide valuable markers of OS and inflammation and a helpful diagnostic factor for preventing injury and the development of coronary artery disease.
Future studies with larger sample sizes on PAB values and hs‐CRP may lead to the more practical use of these two markers in clinical diagnosis and follow‐up of diseases and improve patients’ quality of life. ## AUTHOR CONTRIBUTIONS Hassan Ehteram: Conceptualization (supporting); writing – review and editing (equal). Sara Raji: Data curation (equal); writing – original draft (equal); writing – review and editing (equal). Mina Rahmati: Data curation (equal); writing – review and editing (equal). Hanieh Teymoori: Data curation (equal); writing – review and editing (equal). Samaneh Safarpour: Data curation (equal); writing – review and editing (equal). Nahid Poursharifi: Data curation (equal); writing – review and editing (equal). Mona Hashem Zadeh: Data curation (equal); writing – original draft (equal); writing – review and editing (equal). Reza Pakzad: Formal analysis (lead); writing – review and editing (equal). Hossein Habibi: Writing – review and editing (equal). Naser Mobarra: Conceptualization (lead); supervision (lead); writing – review and editing (equal). ## FUNDING INFORMATION This study was funded by Mashhad University of Medical Sciences (Grant No. 981826). ## CONFLICT OF INTEREST The authors declared no conflicts of interest. ## ETHICAL APPROVAL The Ethics Committee of Mashhad University of Medical Sciences approved the study (IR.MUMS.REC.1399.533). ## DATA AVAILABILITY STATEMENT The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
# Assessment of Blood Microcirculation Changes after COVID-19 Using Wearable Laser Doppler Flowmetry ## Abstract The present work is focused on the study of changes in microcirculation parameters in patients who have undergone COVID-19 by means of wearable laser Doppler flowmetry (LDF) devices. The microcirculatory system is known to play a key role in the pathogenesis of COVID-19, and its disorders manifest themselves long after the patient has recovered. In the present work, microcirculatory changes were tracked dynamically in one patient for 10 days before his disease and for 26 days after his recovery, and data from a group of patients undergoing rehabilitation after COVID-19 were compared with the data from a control group. A system consisting of several wearable laser Doppler flowmetry analysers was used for the studies. The patients were found to have reduced cutaneous perfusion and changes in the amplitude–frequency pattern of the LDF signal. The obtained data confirm that microcirculatory bed dysfunction is present in patients for a long period after the recovery from COVID-19. ## 1. Introduction The propagation of coronavirus infection, also known as COVID-19, has caused a huge number of illnesses and deaths. To date, there have been more than 650 million confirmed cases of SARS-CoV-2 infection and more than 6 million deaths worldwide (according to the Johns Hopkins University Coronavirus Resource Center). Three years after the first reported cases of SARS-CoV-2 infection, the pandemic is still far from being over. Despite the development and widespread implementation of vaccines and containment measures, COVID-19 still has a significant impact on the lives of millions of people worldwide. Emerging evidence suggests a close link between severe clinical COVID-19 and an increased risk of its vascular complications, such as thromboembolism [1].
Approximately 40–45% of SARS-CoV-2 infections are asymptomatic, but clinical observations suggest that complications may occur even in the asymptomatic course of the disease [2]. Although COVID-19 was originally considered a respiratory disease, it has now been established that it affects multiple organs and systems, including the cardiovascular system, gastrointestinal system, brain, kidney, liver, skeletal muscle, and skin of infected patients [3,4]. Recently, there has been increasing evidence of the negative impact of this disease on the microcirculatory system of the blood [5,6,7]. It is known that SARS-CoV-2 affects the microcirculatory bed, causing edema and damage to endothelial cells, promoting the development of microthrombosis and capillary blockage, and causing a variety of other negative effects [8]. The development of these disorders, in addition to the direct threat to the patient’s life and health, can also be a key factor in the development of long-term consequences of coronavirus infection, significantly reducing the quality of life of patients. Serious concerns are caused by the fact that proinflammatory status and procoagulation activity can remain in patients for a long time after the recovery [9]. Recent observations show that a fairly large proportion of patients who have recovered from a coronavirus infection subsequently suffer long-term effects of the disease [10]. These include symptoms such as weakness, breathlessness, chest and joint pain, confusion, memory and concentration problems (so-called “brain fog”), mood changes, etc. These and other symptoms can persist for months after the disease itself and significantly reduce patients’ quality of life [11]. These disorders are referred to as “long COVID” or post-COVID syndrome. Current research is largely focused on the acute stage of SARS-CoV-2, but ongoing monitoring of the long-term effects of the disease is also necessary.
In this context, the need for research into the rehabilitation of patients after coronavirus infection is clear. There is a significant body of evidence suggesting that cardiovascular complications of coronavirus can also occur in an asymptomatic course [2], making it even more difficult to detect such complications at an early stage. This means that there will be an urgent need for both diagnostic and rehabilitative measures in the next few years for patients who have suffered from this disease. In addition, there are risks of a similar clinical outcome not only with COVID-19 but also with possible future epidemics of respiratory infections. Existing diagnostic methods routinely used in clinical practice do not allow adequate assessment of blood flow at the microcirculatory level. Currently, there is a need to develop new approaches to the diagnosis of microcirculatory disorders occurring in coronavirus infection, as well as to develop strategies for individual therapy and rehabilitation of patients after COVID-19. Despite the widespread prevalence of the disease and the incidence of cardiovascular complications, as well as the proven extensive involvement of the microvasculature in pathological processes, very few papers have been published to date on the noninvasive assessment of blood microcirculation after COVID-19 [12,13,14]. One of the most common and applicable methods for diagnosing the state of the blood microcirculation system is laser Doppler flowmetry (LDF) [15,16]. This method is widely used in the diagnosis of complications of diabetes mellitus [17,18], rheumatic diseases [19], hypertension [20], and a number of other socially important diseases. Over the years, different modifications of the conventional laser Doppler technique have been introduced, including several attempts at developing wearable devices [21,22,23].
In COVID-19 clinical research, studies using LDF have mainly focused on the dynamic characteristics of blood flow, including the application of functional tests. It has been shown that, during the acute phase of COVID-19, patients demonstrate a reduced vasodilatory response to local heating and reduced microvascular reactivity [24]. The correlations between microcirculatory parameters measured by LDF and laboratory test results of patients during the acute period of the disease have also been analysed [25]. Another study using laser speckle contrast imaging technology demonstrated reduced vasodilation in patients with COVID-19 in response to acetylcholine and sodium nitroprusside, which persists for at least 3 months after the disease [26]. We did not find any studies in the English-language literature devoted to spectral analysis of LDF recordings in patients who underwent COVID-19. Since such analysis is known to provide valuable diagnostic information about the state of the systems regulating blood flow, including the nervous system and endothelial function, the present work aimed to fill the gaps in this area by comprehensively examining the changes in blood microcirculation that occur both in the acute period of COVID-19 and in the long term during rehabilitation procedures. ## 2.1. Experimental Equipment A distributed system consisting of 4 wireless wearable microcirculatory blood flow analysers implementing the LDF method, “LAZMA PF” (LAZMA Ltd, Russia; in the EU/UK this device is made by Aston Medical Technology Ltd., UK as “FED-1b”), was used for data recording in this study [27,28,29]. These analysers use VCSEL die chips (850 nm, 1.4 mW/3.5 mA, Philips, The Netherlands) as a single-mode radiation source. The analysers are implemented without optical fibres, with direct skin irradiation from a window at the back of the instrument.
This avoids fibre-coupling losses and reduces the movement artefacts which are common in fibre-based LDF monitors. The devices operate autonomously on internal battery power and transfer the measured signal via Bluetooth and/or Wi-Fi. The devices also have built-in motion and temperature sensors to eliminate the possible influence of motion artefacts and temperature changes on the recorded signal. When processing motion sensor data, recordings simultaneous with the subject’s movements are identified as potential sources of distortion of the LDF-gram and filtered out using special software. The appearance of the analysers (left) as well as the options for mounting them on the volunteer’s hands (right) are shown in Figure 1. ## 2.2. Experimental Protocol The present study comprised 2 phases. The first stage involved a dynamic assessment of the processes occurring in the blood microcirculatory system during the acute period of coronavirus infection. During routine daily LDF measurements, an 18-year-old male patient was found to have been accidentally infected with SARS-CoV-2 (confirmed by PCR analysis of nasopharyngeal swabs). The patient had not been vaccinated against COVID-19 prior to the study, nor had he previously had COVID-19. The measurements were carried out in the supine position, each lasting for 10 min. To record signals, analysers were attached to the pads of the third fingers and big toes, as well as on the dorsal surfaces of the wrists and the inner parts of the upper third of the shins. The positioning and attachment of wearable devices on the patient’s body during the study are shown in Figure 2. The measurements were taken 10 days before the onset of the disease and during 26 days after the recovery. No measurements were taken during the acute phase of the disease (7 days) because of the patient’s poor well-being. A total of more than 170 LDF signals were measured and processed over the entire study period for this patient.
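The motion-sensor gating described above can be sketched as follows. This is a minimal illustration and not the device firmware: the sampling rate, acceleration threshold, and padding window are invented values for the sketch.

```python
import numpy as np

def mask_motion_artifacts(perfusion, accel_mag, fs=20.0, g_thresh=0.1, pad_s=1.0):
    """Flag LDF samples recorded during subject movement.

    perfusion : 1-D array of LDF perfusion samples (p.u.)
    accel_mag : accelerometer magnitude minus gravity, same length (g)
    g_thresh  : movement threshold (illustrative value)
    pad_s     : seconds to discard around each detected movement
    Returns a copy of the perfusion array with artefact samples set to NaN.
    """
    moving = np.abs(np.asarray(accel_mag, dtype=float)) > g_thresh
    pad = int(pad_s * fs)
    # Dilate the movement mask so transients around each movement are also dropped
    kernel = np.ones(2 * pad + 1)
    dilated = np.convolve(moving.astype(float), kernel, mode="same") > 0
    cleaned = np.asarray(perfusion, dtype=float).copy()
    cleaned[dilated] = np.nan
    return cleaned

# Example: one movement spike at sample 100 knocks out +/- 1 s around it
perfusion = np.ones(200)
accel = np.zeros(200)
accel[100] = 0.4
clean = mask_motion_artifacts(perfusion, accel)
```

Samples flagged as NaN can then be excluded before spectral analysis of the LDF-gram.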
The second phase of the study involved the comparison of blood microcirculation parameters measured by LDF in a group of patients undergoing rehabilitation procedures after COVID-19 and a group of conditionally healthy volunteers with no previous history of coronavirus infection. The main group consisted of 23 subjects who had long COVID symptoms for a prolonged period of time after the recovery from an acute coronavirus infection and were undergoing rehabilitation in a private healthcare facility. Three of them had had a severe COVID-19 infection; all the other patients experienced moderate symptoms of COVID-19. Patients in the main group were measured between 1 and 6 months after the recovery. The mean age of the main group was 58±9 years. The control group included 13 conditionally healthy volunteers of a matching age who were measured in 2019, before the pandemic spread, ensuring that the volunteers in the control group had never encountered COVID-19. Volunteers with any history of cardiovascular or other serious chronic diseases affecting the circulatory system were excluded from the study. The study was conducted with the subject in the supine position in a relaxed state and consisted of a 10-min measurement of microcirculation using a wearable LDF device (“LAZMA-PF”). The analysers were attached to the dorsal surface of the forearms at a point 2 cm above the styloid process and on the inside of the upper third of the shins (see Figure 2C,D), as these points proved to be the most informative in the previous stage of the study. Figure 3 shows a diagram of the experimental design of the study. ## 2.3. Data Analysis In the present study, the analysed parameters were the index of blood microcirculation (Im) and the amplitudes of blood flow oscillations in the different frequency bands corresponding to different mechanisms of microcirculatory blood flow regulation, measured in relative perfusion units (p.u.) [30].
The endothelial (Ae) band (0.005–0.021 Hz) reflects vascular tone regulation due to endothelial activity, both NO-dependent and NO-independent; the neurogenic (An) band (0.021–0.052 Hz) represents the influence of neural innervation on blood flow; the myogenic (Am) band (0.052–0.145 Hz) corresponds to vascular smooth muscle activity; and the respiratory (Ar) and cardiac (Ac) bands (0.145–0.6 Hz and 0.6–2 Hz, respectively) carry information about the influence of thoracic movement and heart rate, respectively, on the peripheral blood flow [31,32]. To calculate the amplitude–frequency spectra of the LDF signal, we used the wavelet-transform apparatus implemented in the software of the wireless wearable analysers “LAZMA-PF”. This software performs a continuous wavelet transform using the complex-valued Morlet wavelet as the analysing wavelet. In addition, the parameter of nutritive blood flow (Imn), estimated by a well-known algorithm [33], was calculated. The use of this parameter makes it possible to estimate the distribution of blood flow along capillary and shunt vessels. The statistical analysis of the data was performed in Origin Pro 2021 software. Due to the limited sample size, the non-parametric Mann–Whitney U test was used to check the statistical significance of differences. Values of p ≤ 0.05 were considered significant. The results are presented as the mean ± SD unless otherwise indicated. ## 3. Results The first phase of the study demonstrated that COVID-19 results in changes in microcirculatory blood flow regulation mechanisms, which can be measured by assessing the spectral characteristics of the LDF signal. The results of the measurements are shown in Table 1. No significant changes were observed in the fingers and toes in these measurements. However, there was a general trend towards a decrease in microcirculation after the disease, and also in the magnitude of the nutritive blood flow in the upper extremities.
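As a simplified illustration of the band decomposition described in the Data Analysis section, the sketch below estimates the peak oscillation amplitude in each regulatory band using a plain FFT on a synthetic signal, rather than the Morlet wavelet transform implemented in the LAZMA-PF software; the signal composition and 20 Hz sampling rate are invented for the example.

```python
import numpy as np

# Frequency bands of microcirculatory blood flow regulation (Hz)
BANDS = {
    "endothelial": (0.005, 0.021),
    "neurogenic":  (0.021, 0.052),
    "myogenic":    (0.052, 0.145),
    "respiratory": (0.145, 0.6),
    "cardiac":     (0.6,   2.0),
}

def band_amplitudes(signal, fs):
    """Peak spectral amplitude (p.u.) of an LDF record in each regulatory band."""
    n = len(signal)
    # Single-sided amplitude spectrum of the mean-removed signal
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal))) * 2.0 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].max()
            for name, (lo, hi) in BANDS.items()}

# Synthetic 10-min record at 20 Hz: a myogenic (0.1 Hz) and a cardiac (1 Hz) component
fs = 20.0
t = np.arange(12000) / fs  # 600 s
ldf = 5.0 + 2.0 * np.sin(2 * np.pi * 0.1 * t) + 0.5 * np.sin(2 * np.pi * 1.0 * t)
amps = band_amplitudes(ldf, fs)
```

On real recordings the wavelet approach is preferable because the amplitudes of the slow (endothelial, neurogenic) rhythms drift over the 10-min record, which a single global FFT averages away.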
Figure 4 shows box plots of the amplitude of blood flow oscillations before and after the disease, measured in the wrists and shins. A statistically significant decrease in the amplitude of myogenic oscillations was found in the arms after the disease. In the legs, a significant decrease in the amplitudes of respiratory and cardiac oscillations was observed. Similar changes can be traced in the upper extremities, but they do not reach statistically significant levels there. Figure 5 shows the dynamic changes in blood flow oscillations measured in the wrists (a) and shins (b). The figures show that COVID-19 causes high-amplitude changes in the magnitude of endothelial and neurogenic blood flow oscillations immediately after the recovery; this probably explains the high variability of these values at the “After” stage and why the upward trend in them after the disease does not reach statistical significance. These changes are especially pronounced in the upper extremities. In the legs, there is a significant drop in the amplitude of the cardiac oscillations immediately after the disease and of the respiratory oscillations one week after the recovery, which also correlates with the results obtained in the upper extremities. The results of the second stage of the experimental study were subsequently analysed. Table 2 presents the data obtained from the second stage of the study. Both upper and lower extremities show significantly lower values of microcirculation and nutritive blood flow. Whisker boxes for these parameters are shown in Figure 6. An increase in overall oscillatory blood flow activity was also noted in both upper and lower extremities, with statistically significant differences in the neurogenic, respiratory and cardiac ranges in the wrists. Whisker boxes for the respiratory and cardiac oscillations measured in the wrists are shown in Figure 7. ## 4. Discussion In the present work, we obtained experimental data which confirm the presence of microcirculatory bed dysfunction for a long period after the recovery from COVID-19. The first part of the study, which included daily measurements of one volunteer for 10 days before his disease and almost a month after the recovery, showed that after a month the parameters did not recover to their original values. This stage of the studies revealed a decrease in the myogenic activity of microcirculation in the upper extremities. It is worth noting that the changes in the patterns of peripheral blood flow oscillations in the post-COVID phase have not yet been studied in detail. Myogenic oscillations play an important role in the process of oxygen delivery to biological tissues [34]. A decrease in myogenic oscillations leads to an increase in the dynamic resistance of microvessels and, as a consequence, to a decrease in the nutritive blood flow. Combined with the observed decrease in neurogenic regulatory activity, this change may indicate the activation of blood flow shunt pathways. In addition, some studies show that high temperature can inhibit vasomotion [35,36], so the decrease in myogenic activity revealed in our study may be a consequence of the high body temperature of the patient during the period of the disease. The period immediately after the recovery from COVID-19 in this study was also characterized by decreased values of respiratory and cardiac microcirculatory oscillations in both upper and lower extremities (with significant differences in the legs). In this case, dynamic observations show that cardiac fluctuations are reduced immediately after the disease, and respiratory fluctuations change during the week after the recovery. Another interesting observation of this study was the increased amplitude of endothelial oscillations in the post-COVID phase and the dynamics of these changes.
Numerous studies identify endothelial dysfunction as one of the main pathogenic mechanisms of COVID-19 [37,38], which can persist for more than 12 months after the recovery. Studies also show that long COVID symptoms, especially nonrespiratory symptoms, are due to persistent endothelial dysfunction [39]. In our work, we observed increased amplitudes of these oscillations both in the early stages of recovery from the disease and in the later stages (in the second phase of the study), although these differences did not reach a statistically significant level. In the group of patients undergoing rehabilitation after COVID-19, the most interesting observation in the amplitude–frequency spectrum of the LDF signal, in our opinion, was the increase in the amplitude of neurogenic oscillations. A decrease in neurogenic tone leads to the dilation of the arterioles [40,41] and, consequently, a significant increase in the amplitude of cardiac oscillations (which we observe in our study). The lumen size of skin arterio-venous anastomoses (AVA) is regulated exclusively by neurogenic mechanisms, so we can assume that they also expand with the decrease in neurogenic tone. The dilation of AVA leads to arterio-venous shunting of the blood bypassing the capillary channel, which explains the significant decrease in Imn, the decrease in the number of functioning capillaries [13,14], and the reduction in perfusion (Im); the resulting venular overflow due to arterial blood discharge in turn leads to the dilation of venules [40,41] and a significant increase in the amplitude of respiratory-driven blood flow oscillations. ## Study Limitations The present study was conducted on a small group of patients, some of whom had comorbidities, so there is no certainty that the results will hold for the broader population.
The data obtained, however, should be taken into account in the development of new diagnostic criteria for assessing the degree of microcirculatory disturbance and the rehabilitation process in recently recovered patients. There is a need for additional studies with a larger group of patients, including patients with different courses of COVID-19 (mild, moderate, and severe disease). Despite the already three-year history of coronavirus infection and the undoubted advantages of the LDF method for diagnosing microcirculatory disorders, there are almost no studies devoted to spectral analysis of the LDF signal in COVID-19 pathology. In this pilot study, we demonstrated the ability of laser Doppler flowmetry, coupled with wavelet analysis of the obtained signals, to detect microcirculatory disorders in patients who have undergone COVID-19, which makes it a promising tool for future research and assessment of the dynamic changes in microcirculation during the recovery process. ## 5. Conclusions The present work demonstrates the use of laser Doppler flowmetry and peripheral blood flow oscillation analysis to diagnose vascular disorders in patients who have undergone COVID-19, in both the early and advanced stages of recovery. Our work demonstrated a significant increase in the amplitude of neurogenic oscillations in the upper extremities of patients who have had COVID-19, which, we suggest, may be a factor preceding dilation of arterioles and venules and redirection of microcirculatory blood flow from the nutritive to the shunt pathways. The obtained data show that optical noninvasive technologies have the potential for further application, but more research is needed to fully understand the changes in the mechanisms of blood flow regulation that occur after an infection.
# Combination of Muscle Quantity and Quality Is Useful to Assess the Necessity of Surveillance after a 5-Year Cancer-Free Period in Patients Who Undergo Radical Cystectomy: A Multi-Institutional Retrospective Study ## Abstract ### Simple Summary Although continuous surveillance after a 5-year cancer-free period in patients with bladder cancer who undergo curative surgery is recommended, optimal candidates for continuous surveillance remain unclear. Sarcopenia is associated with an unfavorable prognosis in bladder cancer. We aimed to investigate the impact of low muscle quantity and quality (defined as severe sarcopenia) on prognosis after a 5-year cancer-free period in patients who underwent radical cystectomy. Our results showed that the 10-year recurrence rate after a 5-year cancer-free period was low (approximately 5%), and severe sarcopenia was not associated with increased recurrence risk. Moreover, severe sarcopenia was identified as a significant risk factor for mortality unrelated to bladder cancer. Taken together, patients with severe sarcopenia might not need continuous surveillance after a 5-year cancer-free period, considering their high mortality unrelated to bladder cancer. ### Abstract Background: Although continuous surveillance after a 5-year cancer-free period in patients with bladder cancer (BC) who undergo radical cystectomy (RC) is recommended, optimal candidates for continuous surveillance remain unclear. Sarcopenia is associated with unfavorable prognosis in various malignancies. We aimed to investigate the impact of low muscle quantity and quality (defined as severe sarcopenia) on prognosis after a 5-year cancer-free period in patients who underwent RC. Methods: We conducted a multi-institutional retrospective study assessing 166 patients who underwent RC and had five or more years of follow-up after a 5-year cancer-free period.
Muscle quantity and quality were evaluated using the psoas muscle index (PMI) and intramuscular adipose tissue content (IMAC), respectively, measured on computed tomography images obtained five years after RC. Patients with lower PMI and higher IMAC values than the cut-off values were diagnosed with severe sarcopenia. Univariable analyses were performed to assess the impact of severe sarcopenia on recurrence, adjusting for the competing risk of death using the Fine-Gray competing risk regression model. Moreover, the impact of severe sarcopenia on non-cancer-specific survival was evaluated using univariable and multivariable analyses. Results: The median age and follow-up period after the 5-year cancer-free period were 73 years and 94 months, respectively. Of 166 patients, 32 were diagnosed with severe sarcopenia. The 10-year RFS rate was 94.4%. In the Fine-Gray competing risk regression model, severe sarcopenia did not show a significantly higher probability of recurrence, with an adjusted subdistribution hazard ratio of 0.525 (p = 0.540), whereas severe sarcopenia was significantly associated with non-cancer-specific survival (hazard ratio 1.909, p = 0.047). These results indicate that patients with severe sarcopenia might not need continuous surveillance after a 5-year cancer-free period, considering the high non-cancer-specific mortality. ## 1. Introduction Bladder cancer (BC) is the tenth most common cancer worldwide [1]. Radical cystectomy (RC) with pelvic lymph node dissection and urinary diversion remains the gold standard treatment for muscle-invasive and high-risk non-muscle-invasive BC [2]. Continuous surveillance after a 5-year cancer-free period in patients who undergo RC is recommended by the European Association of Urology guidelines [2]. However, late recurrence after RC has been reported to be infrequent [3,4]. Moreover, optimal candidates for continuous surveillance remain unclear.
Identifying these candidates may be helpful in the development of individualized surveillance protocols. Sarcopenia is represented by two dysregulation patterns of body composition: loss of skeletal muscle quantity (myopenia) and quality (myosteatosis) [5]. Although sarcopenia has been reported to be associated with unfavorable prognosis in patients who underwent surgical treatment for various malignancies [6,7], the prognostic value of sarcopenia in patients who undergo RC remains controversial [8,9,10,11]. Moreover, no study has investigated its impact on oncological outcomes and non-cancer-specific mortality after a 5-year cancer-free period. Considering the close relationship between sarcopenia and high mortality caused by non-malignant diseases [12,13,14], we hypothesized that patients with low muscle quantity and quality (defined here as severe sarcopenia) might have a high non-cancer-specific mortality; therefore, they might not need continuous surveillance after a 5-year cancer-free period. The aim of the present study was to evaluate the impact of low muscle quantity and quality on recurrence-free survival (RFS) and non-cancer-specific survival after a 5-year cancer-free period in patients with BC who underwent RC. ## 2.1. Ethics Statement This study followed the principles of the Declaration of Helsinki and was approved by the ethics committees of the Hirosaki University Graduate School of Medicine (authorization number: 2019-099-1) and all hospitals included in this study. Written consent was not obtained due to the public disclosure of the study information (opt-out approach). ## 2.2. Patient Selection To include patients who had sufficient follow-up periods (five years or more) after a 5-year cancer-free period, we retrospectively evaluated 431 patients with BC who underwent RC between October 1995 and December 2012 at one academic center and five general hospitals. 
We excluded 193 patients who experienced local recurrence and/or distant metastasis, died from any cause, or were lost to follow-up within five years after RC, and 72 patients who had no information on their heights or no digital computed tomography (CT) scans available for body composition analysis. Ultimately, 166 patients were included in this study (Figure 1). ## 2.3. Evaluation of Variables The following variables were analyzed: age, sex, Eastern Cooperative Oncology Group performance status (ECOG PS), hypertension (HTN), diabetes mellitus, history of cardiovascular disease (CVD), chronic kidney disease (CKD), clinical stage, neoadjuvant chemotherapy (NAC), urinary diversion, pathological outcomes, and adjuvant chemotherapy. Age and comorbidities at five years after RC were used in the analyses. Renal function was evaluated by estimated glomerular filtration rate (eGFR) using a modified version of the abbreviated Modification of Diet in Renal Disease Study formula for Japanese patients [15], and CKD was defined as eGFR < 60 mL/min/1.73 m2. Tumor stage was assigned according to the 2009 TNM classification system recorded by the Union for International Cancer Control. Tumor grade was classified according to the 1973 World Health Organization classification system. ## 2.4. NAC and Adjuvant Chemotherapy Since September 2004, patients have received two to four courses of NAC, composed of a platinum-based combination regimen of gemcitabine plus cisplatin, gemcitabine plus carboplatin, or methotrexate, vinblastine, adriamycin, and cisplatin. Regimens were selected based on guidelines regarding eligibility for the proper use of cisplatin, overall patient status, and the clinician’s discretion. The cycles were repeated every 21 days. Adjuvant chemotherapy was not routinely administered.
Indications for adjuvant chemotherapy included pT4, pathological lymph node involvement, grade 3, lymphovascular invasion, or positive surgical margins in patients who were not treated with NAC. Patients were selected for adjuvant chemotherapy at the clinician’s discretion. We administered one to three courses of adjuvant chemotherapy to patients with a postoperative status feasible for toxic chemotherapy. Adjuvant chemotherapy comprised a platinum-based combination regimen of gemcitabine plus cisplatin, gemcitabine plus carboplatin, or methotrexate, vinblastine, doxorubicin, and cisplatin. ## 2.5. Surgical Procedures RC was performed using a previously described basic technique [16]. Briefly, the patients underwent RC, standard pelvic lymph node dissection, and urinary diversion (orthotopic ileal neobladder construction, ileal conduit diversion, or cutaneous ureterostomy). ## 2.6. Follow-Up Schedule The follow-up schedule after the 5-year cancer-free period comprised annual urine analysis, urine cytology, blood chemistry, and lung, abdominal, and pelvic CT scans. ## 2.7. Evaluation of Muscle Quantity and Quality Muscle quantity was evaluated using the psoas muscle index (PMI). We measured the cross-sectional areas of the right and left psoas muscles on plain CT images at the level of the third lumbar vertebra (L3) five years after RC. The muscles were identified based on their anatomical features, and the bilateral psoas muscle areas were evaluated using manual tracing. The PMI was calculated by normalizing these cross-sectional areas to the square of the height (cm2/m2) [17]. Muscle quality was evaluated based on intramuscular adipose tissue content (IMAC) using L3-level plain CT images five years after RC. We traced the multifidus muscle and subcutaneous fat to measure their CT values (Hounsfield units). IMAC was calculated by dividing the CT value of the multifidus muscles by that of the subcutaneous fat.
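The two body-composition indices described above reduce to simple arithmetic. A minimal sketch follows; the numeric values are illustrative only, not patient data from the study.

```python
def psoas_muscle_index(psoas_area_cm2, height_m):
    """PMI: total L3 psoas cross-sectional area normalized to height squared (cm2/m2)."""
    return psoas_area_cm2 / height_m ** 2

def imac(multifidus_hu, subcutaneous_fat_hu):
    """IMAC: ratio of multifidus CT attenuation to subcutaneous-fat attenuation.

    Fat has negative Hounsfield units, so more intramuscular fat pushes the
    muscle HU down and the ratio up; a higher IMAC therefore means lower
    skeletal muscle quality."""
    return multifidus_hu / subcutaneous_fat_hu

# Illustrative values
pmi = psoas_muscle_index(psoas_area_cm2=16.0, height_m=1.70)   # ~5.54 cm2/m2
ratio = imac(multifidus_hu=45.0, subcutaneous_fat_hu=-100.0)   # -0.45
```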
A higher IMAC indicates a greater amount of adipose tissue in the skeletal muscles and, therefore, lower skeletal muscle quality [18]. Since the ranges of PMI and IMAC in men and women differ considerably [19,20], and their optimal cut-off values for mortality in patients with BC have yet to be established, optimal cut-off values for non-cancer-specific mortality were calculated separately for men and women using receiver operating characteristic (ROC) curves. In the present study, we defined patients with both low muscle quantity and low muscle quality as patients with severe sarcopenia. Patients were divided into two groups: those with lower PMI and higher IMAC values than their respective cut-off values (severe sarcopenia group) and those with higher PMI and/or lower IMAC values (control group) (Figure 1).

## 2.8. Statistical Analysis

SPSS version 24.0 (IBM Corp., Armonk, NY, USA), R 4.0.2 (The R Foundation for Statistical Computing, Vienna, Austria), and GraphPad Prism 5.03 (GraphPad Software, San Diego, CA, USA) were used for statistical analyses. Quantitative variables are expressed as medians with interquartile ranges. Differences in quantitative variables between the two groups were analyzed using the Mann–Whitney U test. Categorical variables were compared using Fisher's exact test or the chi-squared test. RFS, overall survival (OS), and non-cancer-specific survival were evaluated using the Kaplan–Meier method and compared using the log-rank test. Moreover, the cumulative incidence of recurrence was estimated, with death before recurrence treated as a competing risk. The Gray test was performed to compare cumulative incidences between the control and severe sarcopenia groups. Subsequent univariable analyses were performed to assess the impact of severe sarcopenia on recurrence, adjusting for the competing risk of death using the Fine–Gray subdistribution hazards model.
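The sex-specific cut-offs described above were derived from ROC curves in the statistical packages. As an illustration of one common choice, the following pure-Python sketch picks the threshold maximizing Youden's J (sensitivity + specificity − 1); the function name and the convention that low marker values (as with PMI) predict the event are our assumptions, not the paper's exact procedure:

```python
import numpy as np

def youden_cutoff(marker, event):
    """Return the threshold t maximizing Youden's J, calling marker <= t
    'positive' (appropriate for PMI, where LOW values predict death;
    for IMAC, where HIGH values predict death, pass the negated marker)."""
    marker = np.asarray(marker, dtype=float)
    event = np.asarray(event, dtype=int)
    best_t, best_j = None, -np.inf
    for t in np.unique(marker):
        positive = marker <= t
        tp = np.sum(positive & (event == 1))
        fn = np.sum(~positive & (event == 1))
        tn = np.sum(~positive & (event == 0))
        fp = np.sum(positive & (event == 0))
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        if sens + spec - 1.0 > best_j:
            best_j, best_t = sens + spec - 1.0, t
    return best_t
```

As in the study, such a cut-off would be computed separately for the male and female subsets.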
Univariable Cox proportional hazards regression analyses were performed to identify significant factors associated with RFS. Univariable and multivariable Cox proportional hazards regression analyses were performed to evaluate the impact of severe sarcopenia on non-cancer-specific survival. These outcomes were calculated from five years after RC to the date of the first event or last follow-up. Recurrence was defined as local pelvic recurrence, remnant urothelial recurrence, or distant metastasis. Non-cancer-specific mortality was defined as death unrelated to BC. Statistical significance was set at p < 0.05.

## 3.1. Patients' Backgrounds

The median age and follow-up period after the 5-year cancer-free period were 73 years and 94 months, respectively. Of the 166 patients, 85 (51%) and 19 (11%) received NAC and adjuvant chemotherapy, respectively. The patients' backgrounds are summarized in Table 1.

## 3.2. Evaluation of Muscle Quantity and Quality

The median PMI values in men and women were 6.18 cm2/m2 and 4.65 cm2/m2, respectively. The optimal cut-off values of PMI for non-cancer-specific mortality in men and women were 5.28 cm2/m2 and 6.35 cm2/m2, respectively. Of the 166 patients, 94 (57%) and 72 (43%) had PMI values higher and lower than the cut-off values, respectively. The median IMAC values in men and women were −0.46 and −0.33, respectively. The optimal cut-off values of IMAC for non-cancer-specific mortality in men and women were −0.49 and −0.04, respectively. Of the 166 patients, 84 (51%) and 82 (49%) had IMAC values lower and higher than the cut-off values, respectively. Patients were divided into two groups: those with lower PMI and higher IMAC values (severe sarcopenia group, n = 32) and those with higher PMI and/or lower IMAC values (control group, n = 134) (Figure 1). No significant differences in the patients' backgrounds were observed between the two groups, except for age and ECOG PS (Table 1).

## 3.3. BC Recurrence

By the end of the follow-up period after the 5-year cancer-free period, nine (5.4%) patients experienced BC recurrence, including recurrence in the upper urinary tract (n = 3), lymph nodes (n = 2), urethra (n = 1), local pelvis (n = 1), neobladder (n = 1), and distant metastasis (n = 1). Of the nine patients who experienced BC recurrence, eight (6.0%) were in the control group and one (3.1%) was in the severe sarcopenia group. The 5-year and 10-year RFS rates were 95.2% and 94.4%, respectively (Figure 2A). Almost all recurrence detection rates ([number of patients with recurrence/number of patients under surveillance during a certain period] × 100) were under 1% throughout the entire follow-up period (Figure 2B). In the Gray test, the cumulative incidence rate of recurrence did not differ significantly between the control and severe sarcopenia groups (Figure 2C, p = 0.528). In the univariable analyses, none of the patient factors, clinical stage, perioperative chemotherapy, or pathological outcomes were significantly associated with shorter RFS (Table S1). Similarly, neither lower PMI, higher IMAC, nor severe sarcopenia was associated with a significantly higher probability of recurrence (Figure 2D; subdistribution hazard ratio [SHR] 0.664, 95% confidence interval [CI] 0.168–2.630, p = 0.560; SHR 0.814, 95% CI 0.220–3.010, p = 0.760; SHR 0.525, 95% CI 0.067–4.140, p = 0.540, respectively).

## 3.4. OS and Non-Cancer-Specific Survival

By the 5-year mark, the 10-year mark, and the end of the follow-up period after the 5-year cancer-free period, 29 (18%), 51 (31%), and 64 (39%) patients had died from any cause, respectively. The main causes of death during the entire follow-up period were other malignancies (25%) and CVD (22%), followed by infectious diseases (14%) (Figure 3). Of the 64 patients who died from any cause, six (9.4%) died of BC (Figure 3).
The OS in patients with lower PMI values was significantly shorter than that in patients with higher PMI values (Figure 4A, p = 0.003). The OS in patients with higher IMAC values was significantly shorter than that in patients with lower IMAC values (Figure 4B, p = 0.025). The OS in the severe sarcopenia group was significantly shorter than that in the control group (Figure 4C, p < 0.001). By the end of the follow-up period after the 5-year cancer-free period, 37 (28%) patients in the control group and 21 (66%) in the severe sarcopenia group had died from non-cancer-specific causes. The 5-year and 10-year non-cancer-specific mortality rates in the severe sarcopenia group were significantly higher than those in the control group (Figure 5A,B; 38% vs. 10%, p < 0.001; 56% vs. 21%, p < 0.001, respectively). The non-cancer-specific survival in patients with lower PMI values was significantly shorter than that in patients with higher PMI values (Figure 5C, p = 0.001). The non-cancer-specific survival in patients with higher IMAC values was significantly shorter than that in patients with lower IMAC values (Figure 5D, p = 0.010). The non-cancer-specific survival in the severe sarcopenia group was significantly shorter than that in the control group (Figure 5E, p < 0.001). In univariable analyses, age, ECOG PS, HTN, CKD, lower PMI, higher IMAC, and severe sarcopenia were significantly associated with shorter non-cancer-specific survival (Table 2). In multivariable analyses, lower PMI and higher IMAC were not significantly associated with shorter non-cancer-specific survival (HR 1.267, 95% CI 0.696–2.308, p = 0.439; HR 1.377, 95% CI 0.688–2.757, p = 0.367, respectively), whereas severe sarcopenia was significantly associated with shorter non-cancer-specific survival (HR 1.909, 95% CI 1.007–3.619, p = 0.047) (Table 3).
Age and CKD were also associated with shorter non-cancer-specific survival (Table 3).

## 4. Discussion

To the best of our knowledge, this is the first study to evaluate the impact of low muscle quantity and quality (defined here as severe sarcopenia) on oncological outcomes and non-cancer-specific mortality after a 5-year cancer-free period in patients with BC who underwent RC. The results of the present study showed that the 10-year recurrence rate after the 5-year cancer-free period was low (approximately 5%), and severe sarcopenia was not associated with an increased recurrence risk. In contrast, severe sarcopenia was identified as a significant risk factor for non-cancer-specific mortality. These results suggest that patients with severe sarcopenia may not need continuous surveillance after a 5-year cancer-free period. Although a prospective validation study with a larger sample size is warranted, these results might help clinicians optimize individualized surveillance protocols after a 5-year cancer-free period. In the present study, neither severe sarcopenia nor low muscle quantity or quality was associated with BC recurrence after a 5-year cancer-free period. Although several studies have investigated the impact of preoperative sarcopenia on oncological outcomes in patients who underwent RC [8,9,10,11], to our knowledge, there is no available evidence about its impact on oncological outcomes after a 5-year cancer-free period in either BC or other malignancies. However, our results are consistent with those of previous studies that focused on preoperative sarcopenia. Smith et al. revealed that sarcopenia evaluated by total psoas area was not associated with worse 2-year survival in 200 patients with BC who underwent RC [10]. Likewise, Wang et al. demonstrated no significant association between sarcopenia and shorter disease-free survival in 112 patients with BC who underwent RC [21]. In contrast, Ornaghi et al. reported opposite results.
They conducted a systematic literature review to investigate the impact of sarcopenia on long-term mortality rates in patients with BC treated with RC and revealed that sarcopenia was significantly associated with unfavorable 5-year cancer-specific survival (CSS) (HR 1.73, p < 0.05) [22]. Similarly, a systematic review and meta-analysis conducted by Hu et al. demonstrated that sarcopenia was associated with poor CSS in patients with BC who underwent RC (HR 1.73, p < 0.001) [23]. Although we do not have a clear explanation for our negative results, these conflicting findings might be caused by the varied definitions of sarcopenia between studies, owing to the lack of an international consensus. Because the lack of available evidence and several limitations in the present study, especially the small number of recurrence events, prevent us from drawing definitive conclusions, further prospective studies with an appropriate sample size and number of recurrence events are warranted. In the present study, severe sarcopenia (low muscle quantity and quality) was associated with increased non-cancer-specific mortality after a 5-year cancer-free period, whereas low muscle quantity or quality alone had marginal effects. Although many studies have investigated the impact of low muscle quantity and of low muscle quality on prognosis in several malignant and non-malignant diseases [5,24,25,26], the combined effects of these parameters have rarely been reported. Hopkins et al. assessed 968 patients with colorectal cancer who underwent curative resection and demonstrated that both low muscle quantity and low muscle radiodensity were independently predictive of worse OS (HR 1.45 and HR 1.53, respectively), but the presence of both increased the HR for OS (HR 2.23) [27]. Similarly, Caan et al.
assessed 1628 female patients with colorectal cancer who underwent surgical resection and revealed that patients with both low muscle quantity and a high total adipose tissue area had a higher risk of overall mortality (HR 1.64) than patients with low muscle quantity or high total adipose tissue area alone (HR 1.38 and HR 1.30, respectively) [28]. Although the patient populations and outcomes evaluated in these studies differed from those in the present study, these results indicate potential additive effects of low muscle quantity and quality on prognosis in patients with malignancies. It is unclear how other diseases contribute to the mortality of BC survivors after RC. In the present study, only six (3.6%) patients died from BC after the 5-year cancer-free period, whereas 58 (34.9%) died from other causes (Figure 3). Kong et al. reported similar results. They assessed 81,843 patients with BC who survived 5–10 years after treatment (93.9% were treated with surgery) and demonstrated that only 6.9% of them died from BC while 47.9% died from other causes, including CVD (11.0%), pulmonary disease (7.7%), and other cancers (3.0%) [29]. Moreover, late recurrence after RC has been reported to be infrequent [3,4]. These results indicate that the contribution of other diseases to mortality after a 5-year cancer-free period is much greater than that of BC. The association between sarcopenia and increased mortality from CVD and infectious diseases has been reported [12,13,14,30]. Moreover, our results showed a relationship between sarcopenia and increased non-cancer-specific mortality after a 5-year cancer-free period. Taken together, patients with sarcopenia who undergo RC might not need continuous surveillance after a 5-year cancer-free period. The present study had several limitations. First, we were unable to control for selection bias and other unquantifiable confounders inherent to retrospective studies.
Moreover, patients without available CT scans for muscle quantity and quality measurements were excluded, which might have caused a selection bias. In addition, skeletal muscle loss is associated with aging, and patients in the severe sarcopenia group were significantly older than those in the control group (Table 1); thus, an age-related bias may remain despite the adjustment for age in the multivariable analyses. Second, a relatively small number of patients were enrolled, and the number of recurrence events was also small. Moreover, the small number of cancer-specific deaths prevented us from evaluating cancer-specific survival. Third, sarcopenia was assessed by manual tracing, which may have been subject to human error. Finally, given its retrospective nature, we had no information on other frailty metrics, such as walking speed, grip strength, and nutritional status.

## 5. Conclusions

Recurrence after a 5-year cancer-free period in patients with BC who underwent RC was infrequent. Severe sarcopenia was not associated with an increased recurrence risk but was associated with non-cancer-specific mortality. Thus, patients with severe sarcopenia may not need continuous surveillance after a 5-year cancer-free period.
# Preparation and Characterization of Intracellular and Exopolysaccharides during Cycle Cultivation of Spirulina platensis

## Abstract

The dried cell weight (DCW) of *Spirulina platensis* gradually decreased from 1.52 g/L to 1.18 g/L after five cultivation cycles. Intracellular polysaccharide (IPS) and exopolysaccharide (EPS) content both increased with increasing cycle number and duration. IPS content was higher than EPS content. The maximum IPS yield (60.61 mg/g) using thermal high-pressure homogenization was achieved after three homogenization cycles at 60 MPa and an S/I ratio of 1:30. IPS showed a more fibrous, porous, and looser structure, and had a higher glucose content and Mw (272.85 kDa) compared with EPS, which may be indicative of IPS's higher viscosity and water-holding capacity. Although both carbohydrates were acidic, EPS had stronger acidity and thermal stability than IPS, accompanied by differences in monosaccharide composition. IPS exhibited the highest DPPH (EC50 = 1.77 mg/mL) and ABTS (EC50 = 0.12 mg/mL) radical scavenging capacities, in line with IPS's higher total phenol content, while showing the lowest HO• scavenging and ferrous ion chelating capacities; this characterizes IPS as the superior antioxidant and EPS as the stronger metal ion chelator.

## 1. Introduction

Spirulina platensis is a multicellular filamentous cyanobacterium that has been nicknamed the "Edible Queen" by the FAO and the FDA for its nutritional value [1,2]. S. platensis and its derivatives have been widely used in dietary supplements and other food products targeted at the health-aware consumer, and are increasingly gaining recognition as functional ingredients [3,4]. Among its most promising derivatives, the polysaccharides of S. platensis (PSP) have attracted significant attention due to their antioxidant, antiaging, antiviral, anti-inflammatory, and immunomodulatory potential, as well as their physicochemical attributes [5,6].
Previous studies have shown that PSP is bioactive and comprises polymeric carbohydrates composed of long chains of monosaccharide units bound together by glycosidic linkages, and that its multiple biological activities are closely related to monosaccharide composition, diverse glycosidic linkages, molecular weight, and spatial configuration [7,8]. Moreover, PSP is of interest for its potential to improve intestinal function and health and to prevent cancer cell proliferation. Therefore, with the increasing demand for PSP in trade markets, the cost-efficient preparation of PSP is currently a pressing research need. Broadly, PSP can be isolated from the cell bodies of S. platensis and from the culture medium to obtain intracellular polysaccharides (IPSs) and exopolysaccharides (EPSs), respectively. IPSs consist of complex acid sulphate polysaccharides and account for 15–20% of S. platensis cell mass [2,8]. Variations in IPS content are related to culture conditions, including carbon source, light, pH, salinity, and cultivation time. Large-scale extraction of IPS is usually performed by cell disruption, chemical maceration, and enzymatic treatment [9]. On the other hand, EPS is a water-soluble heteropolysaccharide that is secreted during growth and binds tightly to S. platensis cell walls, forming a protective capsule against dehydration and toxic agents [10,11,12]. EPS content in S. platensis cultures is potentiated by high salinity and low nutrient availability [13]. EPS is obtained by ultrafiltration with membranes of an appropriate molecular weight cut-off. Both IPS and EPS harbor a variety of functional groups, including -OH, -COOH, -SO3H, and -CH3, which, together with the structural diversity of these polysaccharides, are thought to give them bioactive properties such as antifungal and antioxidant activity, free-radical scavenging, and inhibition of lipid peroxidation [14,15,16].
However, little is known about how IPS and EPS content may be affected during cycle cultivation, and the distinct structural-functional profiles of IPS vs. EPS are yet to be characterized. Therefore, the aims of the present work are to investigate the influence of S. platensis cycle cultivation on IPS and EPS content, and to characterize the structural-functional properties of IPS and EPS; in particular, their physicochemical properties, monosaccharide composition, molecular weight, functional groups, and antioxidant activity. Additionally, an optimized method for the preparation of IPS and EPS based on thermal water coupled with high-pressure homogenization (HPH) is offered. The data herein described thus provide theoretical and technical resources for the cost-efficient production and adequate application of IPS and EPS from S. platensis.

## 2.1. Cultivation of S. platensis

S. platensis strains (FACHB: GY-D18) were purchased from the Institute of Hydrobiology, Chinese Academy of Sciences, PR China. Batches of S. platensis were cultured in 500 mL Erlenmeyer flasks with 300 mL of modified Zarrouk's medium for eight days at 25 °C under cycle illumination (5500 lx). After harvesting the cells, the culture medium was reused to culture a second batch of S. platensis, for a total of five cultivation cycles. Deionized water and analytical grade chemicals and solvents were used in all cases. Erlenmeyer flasks and culture medium were sterilized at 121 °C for 20 min before use.

## 2.2. Evaluation of S. platensis Growth

A 20 mL aliquot of cell suspension was filtered through Whatman filter paper and dried at 105 °C for 24 h. Biomass was calculated according to the method of Zhou et al.
[17] and expressed as dry cell weight (DCW; g/L), following the formulas:

DCW (g/L) = (Wn − W0)/0.02

Growth rate (g/L/d) = (DCWn+1 − DCWn)/1

where Wn is the total dry weight (g) of the filter paper with algae, W0 is the dry weight (g) of the filter paper alone, 0.02 is the aliquot volume (L), and n is time (days).

## 2.3. Experimental Design of RSM for Extraction of IPS from S. platensis

IPSs were extracted from S. platensis cell bodies using hot water coupled with high-pressure homogenization (HPH; GJJ-0.06/100, Shanghai Taichi Light Industry Equipment Co., Ltd.), according to the experimental design of RSM (Table 1). Extracts were centrifuged at 7104× g for 10 min, and supernatants were concentrated at 80 °C. Concentrated supernatants were treated five times with Sevag reagent (chloroform:n-butanol = 4:1) to remove protein, then mixed (1:4) with 95% ethanol and allowed to precipitate for 12 h at 4 °C. Precipitates were suspended in a small aliquot of deionized water before dialysis, and then dialyzed in molecular weight cut-off bags (8–10 kDa) for 48 h to eliminate residual salts. Finally, samples were lyophilized, weighed, and kept for further experiments. The yield of IPS (mg/g) was calculated as follows:

Yield (mg/g) = c × n × v / m

where c is the concentration (mg/mL), n is the dilution factor, v is the sample volume (mL), and m is the dried sample weight (g).

## 2.4. Extraction of Extracellular Polysaccharide (EPS)

EPSs were obtained from S. platensis culture medium according to the method of Li et al. [13]. The medium was filtered through a 0.45 μm membrane, and the filtrate was concentrated at 50 °C and then precipitated with ice-cold ethanol (1:4 v/v) at 4 °C for 12 h. Precipitates were dialyzed, lyophilized, weighed, and stored in the same manner as IPS extracts.

## 2.5. Chemical Composition Analysis of IPS and EPS

Total sugar and protein content were determined by the phenol-sulfuric acid and Coomassie brilliant blue methods, respectively [18]. Total phenol content was estimated using the Folin–Ciocalteu reagent and measuring absorbance at 750 nm; gallic acid (20–100 µg/mL) served as the standard, based on Chaiklahan et al. [8]. Phenol content was expressed in gallic acid equivalents. Ash content was determined based on weight loss after 4 h at 550 °C. The monosaccharide composition of IPS and EPS was determined with a precolumn derivatization HPLC method using 1-phenyl-3-methyl-5-pyrazolone (PMP) (Ma et al., 2019). Samples were thoroughly hydrolyzed to monosaccharides by treatment with 4 M trifluoroacetic acid for 8 h at 110 °C, and then mixed with the PMP solution and chloroform. Samples were then analyzed by HPLC (Agilent, Santa Clara, CA, USA) with UV detection at 245 nm. A 4:21 (v/v) mixture of acetonitrile and 0.125 mol/L KH2PO4 was used as the mobile phase at a flow rate of 0.8 mL/min, at 30 °C. The molecular weight (Mw) of IPS and EPS was determined by gel permeation chromatography (GPC, ELEOS System, Wyatt Technology Co., Goleta, CA, USA), based on the method of Zhang et al. [10] with slightly modified chromatographic conditions: 0.2 mol/L NaNO3 served as the mobile phase at a flow rate of 0.5 mL/min and a column temperature of 25 °C; the injection volume was 20 μL.

## 2.6. Fourier-Transform Infrared (FTIR) Spectroscopy and Scanning Electron Microscopy (SEM)

Infrared spectra of IPS and EPS were obtained using an FTIR spectrometer (Thermo Scientific Nicolet iS50, MA, USA) according to the method of Sasaki et al. [19]. A dried 1.0 mg sample was ground with 100 mg of KBr and pressed into tablets. Tablets were scanned over a wavelength range of 4000–400 cm−1. The surface morphology of IPS and EPS was observed by SEM.
The powdered sample was sprinkled on the surface of a piece of double-sided tape adhered to the microscope's aluminum stub, and then sputter-coated with platinum using an ion sputter coater for observation.

## 2.7. Zeta Potentials and Thermal-Gravimetric (TG) Analysis

Sample solutions (1.0 mg/mL) were prepared in ultrapure water. Zeta potentials were determined at 25 °C over the pH range of 2.0–9.0 using a Zetasizer Nano-ZS particle size and potentiometric analyzer (Malvern Instruments, Malvern, UK). All samples were measured in triplicate. The thermodynamic characteristics of the IPS and EPS samples were analyzed by differential scanning calorimetry (DSC) (Netzsch, DSC 214 Polyma, Selb, Germany). A 5.0 mg sample was weighed into an aluminum pan, with an empty pan as reference. Measurements were performed under nitrogen flow (40 mL/min) at a heating rate of 10 °C/min over a range of 30 °C to 800 °C.

## 2.8.1. DPPH Radical Scavenging Assay

The scavenging activity of different concentrations of IPS and EPS on 2,2-diphenyl-1-picrylhydrazyl (DPPH) radicals was evaluated according to the method of Su et al. [20], with slight modifications. Briefly, 1.0 mL of polysaccharide extract (0–2.5 mg/mL) was thoroughly mixed with 1.0 mL of DPPH solution (0.2 mmol/L in 95% ethanol). The mixture was allowed to react for 30 min, protected from light, and the absorbance was then measured at 517 nm with a UV/Vis spectrophotometer. Here, 95% ethanol and 0–2.5 mg/mL ascorbic acid were used as the blank and positive control, respectively. DPPH radical scavenging activity was calculated with the formula:

DPPH radical scavenging ability (%) = (A0 − A1 + A2) × 100/A0

where A0 represents the absorbance of the DPPH solution alone, A1 is the absorbance of the DPPH solution containing the sample, and A2 is the absorbance of the ethanol solution with the sample.

## 2.8.2. ABTS Radical Scavenging Assay

Scavenging activity on 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonate) (ABTS) radicals was analyzed as described by Tian et al. [6], with some modifications. Equal volumes of an aqueous solution of 7.0 mmol/L ABTS and 2.45 mmol/L K2S2O8 were mixed and incubated at room temperature for 12 h, protected from light, to obtain the ABTS+• solution. This radical solution was then diluted with phosphate buffer solution (pH 7.4) until reaching an absorbance of 0.70 ± 0.02 at 734 nm. A 100 μL aliquot of the diluted ABTS+• solution was mixed with 100 μL of sample (0.1–2.5 mg/mL), and the absorbance was measured at 734 nm after 30 s of oscillation. Deionized water and ascorbic acid (0–2.5 mg/mL) served as the blank and positive control, respectively. ABTS radical scavenging activity was calculated as follows:

ABTS radical scavenging ability (%) = (A0 − A1 + A2) × 100/A0

where A0 is the absorbance of the diluted ABTS+• solution alone, A1 is the absorbance of the diluted ABTS+• solution mixed with the sample, and A2 stands for the absorbance of the sample in deionized water.

## 2.8.3. HO• Radical Scavenging Assay

HO• scavenging activity was assayed according to the method described by Ji et al. [21], with some modifications, as follows: a 1.0 mL aliquot of sample solution (0–2.5 mg/mL) in 95% ethanol was thoroughly mixed with 1.0 mL each of 9 mmol/L H2O2, 9 mmol/L FeSO4, and 9 mmol/L salicylic acid. The solution was then incubated at 37 °C for 60 min with shaking, and the absorbance was measured at 510 nm. Ascorbic acid (0–2.5 mg/mL) was used as a positive control. Hydroxyl radical scavenging activity was calculated as follows:

HO• scavenging ability (%) = (A0 − A1 + A2) × 100/A0

where A0 is the absorbance of the blank without sample, A1 is the absorbance of the reaction solution with the sample, and A2 is the absorbance of the background solution without salicylic acid.

## 2.8.4. Fe2+ Chelating Ability

Fe2+ chelating ability was determined as described by Chang et al.
[22], with minor modifications. Briefly, 1.0 mL of sample (0–2.5 mg/mL) was mixed with 3.7 mL of deionized water and 0.1 mL of 2.0 mmol/L FeCl2·6H2O solution and vigorously stirred for 30 s; then, 0.2 mL of 5 mmol/L ferrozine solution was added. The mixture was incubated for 10 min at 25 °C, and the absorbance was measured at 562 nm. Deionized water and disodium ethylenediaminetetraacetate (EDTA-Na2) (0–2.5 mg/mL) were used as the blank and positive control, respectively. Chelating capacity (%) was calculated with the formula:

Chelating ability (%) = (A0 − A1) × 100/A0

where A0 is the absorbance of the reaction solution without sample and A1 is the absorbance of the reaction solution with the sample.

## 2.8.5. EC50 Calculation

The EC50 is the mass concentration of the sample at which clearance reaches 50%. To calculate EC50 values for DPPH, ABTS, and HO• radical scavenging activity, and for Fe2+ chelating capacity, the clearance ratios at different sample concentrations were plotted and fitted linearly.

## 2.9. Statistical Analysis

All experiments were conducted in triplicate. Data plotting was performed with Design Expert 13, Origin 2021, and IBM SPSS Statistics 26. Analysis of variance (ANOVA) was carried out wherever applicable, and p < 0.01 was regarded as significant. For all figures and tables, data are presented as mean ± SD (n = 3) of three independent replicates.

## 3.1. Change of S. platensis and Polysaccharide Content during Cycle Cultivation

Figure 1 shows the growth curve of S. platensis and the changes in polysaccharide content (IPS and EPS) during cycle cultivation. The DCW of S. platensis and the polysaccharide content showed a linear increase with prolonged cultivation time. The growth of S. platensis followed a parabolic trend, reaching its maximum rate (0.24 g/L/day) on the fourth day of cultivation. After eight days in culture, the DCW of S.
platensis reached 1.52 g/L, representing a 660% increase, and the cells appeared as regular spiral filaments under the microscope (Figure 1a). Total polysaccharide content increased significantly with extended cultivation time, with IPS (80.08 mg/L) reaching a concentration roughly three times that of EPS (27.94 mg/L) by the end of cultivation (Figure 1b). As shown in Figure 1c, the DCW of S. platensis gradually decreased from 1.52 g/L to 1.18 g/L with each cultivation cycle. This is likely explained by a decline in microalgal photosystem II activity as a consequence of the accumulation of dissolved organic matter (DOM) and the increased viscosity of the culture medium [13]. IPS content increased with the number of cycles, which may also be due to DOM accumulation and the reduced DCW, reaching 203.34 mg/L (a 136% increase) after five cultivation cycles. Meanwhile, EPS content increased to 52.62 mg/L by the second cycle and remained stable in subsequent cycles. Reusing culture media several times is likely to curb the availability of nitrogen and other nutrients, which can in turn increase the C/N ratio and thus promote the incorporation of carbon into the EPS fraction [11].

## 3.2. Single-Factor Test of IPS Extraction

With a gradual increase in the solid–liquid ratio, the extraction yield first increased and then decreased (Figure 2a); at a solid–liquid ratio of 1:30 g/mL, the yield reached 54.30 ± 0.75 mg/g. With increasing homogenization pressure, the yield of intracellular polysaccharide reached its maximum of 48.13 ± 0.90 mg/g at 60 MPa (Figure 2b). The highest yield, 48.40 ± 0.29 mg/g, was achieved with three homogenization cycles (Figure 2c).

## 3.3. Optimization of IPS Extraction

To optimize the extraction procedure of IPS from S.
platensis, a total of 17 experiments with three independent variables (A = S/I ratio; B = pressure; C = number of homogenizations) were performed following a Box–Behnken design (BBD) (Table 1). IPS yield ranged from 39.87 to 60.33 mg/g (dry weight) across the 17 experiments. Based on multiple regression analysis of the experimental data, a second-order polynomial equation expressing the relationship between the variables and the yield was generated:

Yield (mg/g) = −161.98 + 4.97A + 3.17B + 33.46C − 0.11AB + 0.06AC − 0.08BC − 0.07A2 − 0.02B2 − 5.19C2

The results of the RSM analysis are presented in Table 2. The F value for the model was 363.16 (p < 0.0001), indicating that the model was statistically significant. The p values of the linear (A; B; C), interaction (AB; BC), and quadratic term coefficients (A2; B2; C2) were all lower than 0.01, implying that these terms had significant effects on the extraction yield. The correlation coefficient (R2) was 0.9979, indicating that the predicted and observed values were similar and that the model was a good fit. In addition, the adjusted determination coefficient (R2adj) was 0.9951, indicating that only 0.49% of the total variation could not be captured by the regression model. The p value for lack of fit was 0.1532, meaning that the lack of fit and pure error were not significantly different. These results thus indicated that the regression model could adequately predict the IPS extraction yield. The relationship between the independent variables and the response is visually represented as 3D response surfaces (Figure 3a). For the S/I ratio and pressure, the projection of the 3D response surface onto the base plane was elliptical, indicating that the interaction between the S/I ratio and pressure was significant. Similar trends were observed for the S/I ratio and the number of homogenizations, and for pressure and the number of homogenizations (Figure 3b,c).
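The fitted equation is an instance of the generic three-factor second-order response surface, y = b0 + Σ bi·xi + Σ bij·xi·xj + Σ bii·xi². A minimal sketch of evaluating such a model; the coefficients passed in below are placeholders for illustration (the published coefficients are rounded, so they should not be expected to reproduce the reported optimum exactly):

```python
def bbd_response(x, b0, lin, inter, quad):
    """Evaluate a three-factor second-order (Box-Behnken) response surface:
    y = b0 + b1*x1 + b2*x2 + b3*x3
          + b12*x1*x2 + b13*x1*x3 + b23*x2*x3
          + b11*x1^2 + b22*x2^2 + b33*x3^2."""
    x1, x2, x3 = (float(v) for v in x)
    return (b0
            + lin[0] * x1 + lin[1] * x2 + lin[2] * x3
            + inter[0] * x1 * x2 + inter[1] * x1 * x3 + inter[2] * x2 * x3
            + quad[0] * x1 ** 2 + quad[1] * x2 ** 2 + quad[2] * x3 ** 2)
```

With A = S/I ratio, B = pressure (MPa), and C = number of homogenizations as the three factors, the optimum could then be located by a grid search or numerical optimizer over the experimental ranges.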
The peak point of each response surface also lay within the smallest ellipse of its contour projection, indicating that an extremum existed within the chosen range. Based on multiple regression and 3D response surface analyses, the optimal conditions for IPS extraction were predicted as follows: S/I ratio = 1:30.79; pressure = 61.08 MPa; and three homogenizations, for an extraction yield of 60.20 mg/g. A verification experiment was performed under the optimal conditions predicted by the model (S/I ratio = 1:30; pressure = 60 MPa; and three homogenizations). The observed IPS extraction yield was 60.61 mg/g, which was not statistically different from the predicted value. Therefore, the regression model was suitable for the prediction of IPS extraction from S. platensis. ## 3.4. IPS and EPS Composition Table 3 shows the chemical compositions of IPS and EPS. Both IPS and EPS contained more than $65\%$ total sugars and less than $5\%$ protein. The phenolic content in IPS ($7.3\%$) was higher than in EPS, suggesting a stronger antioxidant capacity. The carbohydrates present in both IPS and EPS, albeit in different ratios (Figure 4), included mannitol, ribose, rhamnose, glucuronic acid, galacturonic acid, glucose, galactose, xylose, and fucose. However, the proportion of each monosaccharide was statistically significantly different in IPS vs. EPS (Table 3). IPS’s main monosaccharides were glucose ($83.62\%$), rhamnose ($4.42\%$), fucose ($3.25\%$), and glucuronic acid ($2.39\%$). In comparison, EPS contained mainly fucose ($19.99\%$), rhamnose ($15.61\%$), glucose ($14.75\%$), galacturonic acid ($11.13\%$), and galactose ($10.78\%$), and had a lower molecular weight (185.13 kDa). This indicates that IPS may have a higher viscosity than EPS [19]. These differences in monosaccharide content and Mw between PSP fractions point to remarkably distinct functional properties and potential when used as food additives, e.g., as thickening stabilizers. ## 3.5.
FTIR Spectrum Analysis and SEM Imaging The FTIR spectra of IPS and EPS indicated large similarities in the functional groups contained in both polysaccharide fractions (Figure 5a). The absorption peaks observed at around 3413 and 2925 cm−1 are typical of the O−H and C−H stretching vibrations in rhamnose and fucose, respectively [23]. The amide I band at 1650 cm−1 can be taken to represent the symmetrical and asymmetrical stretching vibrations of C=O in COO− and −NHCOCH3, together with the bending vibration of the N−H bond [24]. Similarly, the amide II band with peak absorption at 1542 cm−1 can be mainly attributed to the symmetrical stretching vibration of the C−O bond. Next, absorption peaks at 1400–1200 cm−1 represent bending (angle) vibrations. The absorption peak at 1240 cm−1 can be attributed to the asymmetrical stretching vibration of −S=O, indicating that both IPS and EPS contained a small amount of −SO3H groups [6]. The presence of the pyran ring and the carbohydrate skeleton (C−O−C) is indicated by their characteristic peaks at 1153 and 1064 cm−1, respectively [10]. Finally, the absorption peaks at 898 and 819 cm−1 correspond to the deformation mode of the β−D−pyranoside bond (C−H) and α−mannose, respectively [25]. To better understand their physical properties, the surfaces and microstructures of IPS and EPS were visualized by SEM. IPS and EPS were remarkably different in shape and size (Figure 5b). IPS presented a smooth surface with irregular thin stripes at 2000× magnification. At 5000× magnification, IPS exhibited a loose, finely lamellar, and porous web-like structure; these characteristics could imply enhanced solubility and exposure of active groups in IPS. In contrast, EPS had a relatively smoother and flatter surface, and a more coarsely lamellar, less porous structure.
Because of its fibrous and porous structure, IPS likely has more versatile applications in various foods, and may be especially superior to EPS in water holding capacity [10]. ## 3.6. Zeta Potential and TG Analysis The changes in zeta potential of IPS and EPS solutions in response to pH are shown in Figure 6a. As pH increased from 2.0 to 9.0, the zeta potentials of IPS and EPS decreased from −25.73 to −29.77 mV, and from −26.43 to −37.5 mV, respectively. The smaller variation for IPS in this pH range may be explained by its high glucose content. Although both extracts had negative zeta potentials, meaning both are acidic polysaccharides, EPS showed a more negative potential than IPS overall. This points to EPS’s stronger acidity which, in agreement with previous reports, is probably due to a higher abundance of −SO3H in EPS. Thermal stability is a crucial physicochemical property for the commercial application of polysaccharides. The TG and derivative TG curves were experimentally determined for IPS and EPS (Figure 6b). Analysis of weight loss revealed three major stages: (1) 50–200 °C; (2) 200–500 °C; and (3) 500–800 °C. Weight loss during the first stage was $4.78\%$ for IPS and $10.11\%$ for EPS, and could be attributed to the evaporation of adsorbed and surface water from the polysaccharide’s surface. During the second stage, weight loss was approximately $58.47\%$ (IPS) and $53.03\%$ (EPS), possibly due to the degradation of long carbohydrate chains and the depolymerization of fragments. By the third stage, weight loss slowed down, decreasing by only $29.94\%$ (IPS) and $13.71\%$ (EPS), which could be due to the fact that the remaining compounds were further carbonized and some carbonates were converted into CO2. Maximal weight loss reached $93.84\%$ (IPS) and $77.36\%$ (EPS) at 800 °C.
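Stage-wise weight losses such as those quoted above can be read off a TG curve by interpolating the mass at the stage boundaries. A minimal sketch with synthetic data (the arrays below are illustrative, not the measured curves):

```python
import numpy as np

def stage_losses(temp_c, mass_pct, bounds=(50, 200, 500, 800)):
    """Percent of initial mass lost within each temperature stage,
    interpolating the TG curve at the stage boundaries (in °C)."""
    m = np.interp(bounds, temp_c, mass_pct)
    return [float(m[i] - m[i + 1]) for i in range(len(bounds) - 1)]

temp = [25, 50, 200, 500, 800]   # °C (synthetic TG data)
mass = [100, 100, 95, 37, 7]     # % of initial mass (synthetic)
print(stage_losses(temp, mass))  # -> [5.0, 58.0, 30.0]
```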
These results showed that IPS and EPS are thermally stable below 220 °C, and that EPS’s thermal stability is higher, possibly as a consequence of its higher fucose and rhamnose content. ## 3.7. Antioxidant Capacity Analysis DPPH is a stable nitrogen−centered radical and is widely used for the in vitro evaluation of the antioxidant capacity of natural products [10]. IPS and EPS showed an overall strong scavenging activity on DPPH radicals (Figure 7a), which was dependent on the concentration of polysaccharide, reaching its maximum value at 2.5 mg/mL ($65.9\%$ and $44.7\%$ for IPS and EPS, respectively). The EC50 value of IPS (1.77 mg/mL) was lower than that of EPS (4.67 mg/mL). The greater ability of IPS to scavenge DPPH radicals is consistent with its higher phenolic content. For comparison, the DPPH scavenging activities of S. platensis−derived polysaccharides are superior to those derived from other bacteria and microalgae, specifically *Pseudomonas fluorescens* (approximately $30\%$ at 1.0 mg/mL EPS) [26] and *Sargassum carpophyllum* ($66.6\%$ at 12 mg/mL IPS) [6]. Scavenging of ABTS radicals is another common indicator of the antioxidant potential of natural compounds. Both IPS and EPS were strongly capable of scavenging ABTS radicals in a concentration-dependent manner (Figure 7b). Scavenging activity reached $95.26\%$ at 1.0 mg/mL and $94.47\%$ at 2.5 mg/mL for IPS and EPS, respectively, and these values were not statistically different from the positive control at the same concentrations. The EC50 values for ABTS radical scavenging were 0.12 mg/mL (IPS) and 0.60 mg/mL (EPS); this heightened scavenging activity of IPS may be explained by its lower sulphate/sugar content ($p \leq 0.05$). Likewise, PSP extracts showed better ABTS radical scavenging performance when compared with polysaccharides sourced from *Oudemansiella radicata* mushroom (EC50 = 0.2 mg/mL ORP) [25] and *Botryococcus braunii* (EC50 = 5.13 mg/mL EPS) [27].
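The EC50 values reported above are the concentrations at which scavenging activity reaches $50\%$. When the response rises monotonically with concentration, a rough estimate can be obtained by linear interpolation between measured points; a sketch with synthetic data (not the paper's measurements):

```python
import numpy as np

def ec50(conc_mg_ml, activity_pct, level=50.0):
    """Concentration at which scavenging activity crosses `level`%,
    by linear interpolation (assumes activity rises with concentration)."""
    return float(np.interp(level, activity_pct, conc_mg_ml))

conc = [0.1, 0.5, 1.0, 2.5]      # mg/mL (synthetic)
act = [10.0, 30.0, 55.0, 66.0]   # % scavenging (synthetic)
print(ec50(conc, act))  # -> 0.9
```

In practice a four-parameter logistic fit is the more common estimator; interpolation only brackets the crossing point between two measurements.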
HO• is a highly reactive radical known for its deleterious biological effects, including red blood cell death, DNA damage, and cell membrane degradation, and is prominently implicated in ageing [28]. For this reason, scavenging HO• radicals constitutes an important antioxidant defense mechanism. Both IPS and EPS presented scavenging activity on HO• radicals, which also increased with concentration (Figure 7c). The scavenging capacity of PSPs is directly related to their ability to donate electrons and hydrogen, as supported by previous reports [6]. The EC50 of IPS (1.72 mg/mL) was higher than that of EPS (0.75 mg/mL; $p \leq 0.05$); this superior ability of EPS to scavenge HO• radicals may stem from the abundant alcohol hydroxyl groups in the structure of fucose. The chelating ability of IPS and EPS on ferrous ions also increased at higher concentrations (Figure 7d). EPS had the strongest chelating capacity ($85.91\%$) at 1.0 mg/mL. Remarkably, this was higher than the chelating abilities of both the positive control ($73.90\%$) and IPS ($40.84\%$) at the same concentration. The EC50 values of IPS and EPS were 1.54 mg/mL and 0.38 mg/mL, respectively. Thus, EPS showed a stronger chelating power on ferrous ions, which is probably due to the abundance of COO− and SO4²⁻ groups in EPS. ## 4. Conclusions In this study, the content and functional properties of IPS and EPS were investigated during cycle cultivation of S. platensis. The DCW of S. platensis gradually decreased with the number of cycles, whereas IPS and EPS content gradually increased with the number of cycles and the extension of cultivation time, with IPS content far higher than that of EPS. The maximum yield of IPS (60.61 mg/g) was obtained with a 1:30 S/I ratio at 60 MPa with three homogenizations, using thermal-HPH technology. The same carbohydrates were present in both IPS and EPS but in different ratios.
IPS has a looser, fibrous, porous structure, higher glucose content, and a larger Mw than EPS, indicating higher water holding capacity and viscosity. Both IPS and EPS were shown to be acidic carbohydrates, but EPS showed stronger acidity and thermal stability than IPS, which might be closely related to monosaccharide content. IPS exhibited a better scavenging capacity on DPPH and ABTS radicals than EPS, possibly due to its higher total phenol content, but a far lower scavenging ability on OH• radicals and a lower ferrous ion chelating ability, indicating that IPS has high antioxidant capacity while EPS has strong chelating ability on metal ions. These results could provide theoretical direction for the cost-efficient production and adequate application of IPS and EPS from S. platensis as food additives or medicinal ingredients. In future studies, the extraction efficiency of IPS should be improved for large-scale production. Moreover, the rheology, water holding capacity, and major functional groups of IPS and EPS should also be further analyzed to better understand their functional properties.
# Prediction of Gastrointestinal Tract Cancers Using Longitudinal Electronic Health Record Data ## Abstract ### Simple Summary Cancers of the gastrointestinal tract—including the esophagus, stomach, and intestines—are often diagnosed at an advanced stage, when curative treatments are rare. These cancers can all cause gastrointestinal bleeding, but this often occurs gradually and may be unnoticed by patients. Changes in routine laboratory parameters such as the complete blood count may be able to show these subtle changes prior to clinical presentation or the development of iron deficiency anemia. The aim of our study was to develop models for the prediction of luminal gastrointestinal tract cancers (esophageal, gastric, small bowel, colorectal, anal) using data routinely available within an electronic health record, in a retrospective cohort from an academic medical center. The cohort included 148,158 individuals, with 1025 gastrointestinal tract cancers. We found that longitudinal prediction models using the complete blood count outperformed a single timepoint logistic model for 3-year cancer prediction. ### Abstract Background: Luminal gastrointestinal (GI) tract cancers, including esophageal, gastric, small bowel, colorectal, and anal cancers, are often diagnosed at late stages. These tumors can cause gradual GI bleeding, which may be unrecognized but detectable by subtle laboratory changes. Our aim was to develop models to predict luminal GI tract cancers using laboratory studies and patient characteristics, using logistic regression and random forest machine learning methods. Methods: The study was a single-center, retrospective cohort of patients at an academic medical center who had at least two complete blood counts (CBCs), with enrollment between 2004 and 2013 and follow-up until 2018. The primary outcome was the diagnosis of GI tract cancer.
Prediction models were developed using multivariable single timepoint logistic regression, longitudinal logistic regression, and random forest machine learning. Results: The cohort included 148,158 individuals, with 1025 GI tract cancers. For 3-year prediction of GI tract cancers, the longitudinal random forest model performed the best, with an area under the receiver operator curve (AuROC) of 0.750 ($95\%$ CI 0.729–0.771) and Brier score of 0.116, compared to the longitudinal logistic regression model, with an AuROC of 0.735 ($95\%$ CI 0.713–0.757) and Brier score of 0.205. Conclusions: Prediction models incorporating longitudinal features of the CBC outperformed the single timepoint logistic regression models at 3-years, with a trend toward improved accuracy of prediction using a random forest machine learning model compared to a longitudinal logistic regression model. ## 1. Introduction Malignancies of the gastrointestinal (GI) tract—including esophageal, gastric, small bowel, colorectal, and anal cancers—are a leading cause of morbidity and mortality, with over 200,000 new diagnoses and approximately 80,000 deaths per year in the United States [1]. While routine screening is recommended for colorectal cancer (CRC), many patients go unscreened, particularly in vulnerable and underserved populations [2]. Recent studies have also noted a rising incidence of CRC in younger patients for whom screening may be impractical or ineffective [3,4,5,6]. As a result, even with lowering the age for initiation of CRC screening to 45 [7,8], existing screening programs for GI tract cancers remain inadequate, and there is no routine screening recommended for GI tract cancers in average-risk adults other than for CRC (e.g., stool testing or colonoscopy). As GI tract cancers often do not present clinically until they are at an advanced stage, early diagnosis is critical for improving outcomes [9,10,11,12,13]. 
Improved diagnosis could be achieved by leveraging a common physiological link in luminal GI tract cancers: gradual occult blood loss, ultimately resulting in iron deficiency anemia (IDA) [14]. Healthcare providers often obtain complete blood counts (CBCs) as part of routine clinical care [15,16], but clinicians do not always diagnose IDA accurately and may not obtain the recommended diagnostic evaluation of bidirectional endoscopy (esophagogastroduodenoscopy [EGD] and colonoscopy) for patients with new-onset IDA [15,17,18]. One approach that has been described is the use of electronic trigger tools based on concerning patterns in laboratory data such as new-onset IDA [19]; however, they have not been widely adopted in clinical practice. In addition, site-specific prediction models have examined the association between longitudinal changes in CBCs and the diagnosis of CRC [20,21,22], but new models for prediction of occult malignancy within the entire GI tract are needed. Such models could utilize existing longitudinal laboratory data combined with other patient characteristics stored within the electronic health record (EHR) and serve as automated tools to help improve diagnosis. In this paper, we describe the development of models for the prediction of luminal GI tract cancers (esophageal, gastric, small bowel, colorectal, anal) using a single-center retrospective cohort. We developed and compared models using single timepoint logistic regression, longitudinal logistic regression, and longitudinal random forest machine learning. ## 2. Materials and Methods The study was conducted as a single-center, retrospective cohort study of patients receiving care at an academic medical center (Michigan Medicine, Ann Arbor, MI, USA) between 2004 and 2018. This study was approved with a waiver of informed consent by the University of Michigan Institutional Review Board (HUM00156237), given the retrospective nature and large size of the study.
Data analysis and model development were performed using SAS 9.4 (SAS Institute, Cary, NC, USA) and Python 3.8 (Python Software Foundation, Wilmington, DE, USA). Elements of the TRIPOD guidelines for transparent reporting of multivariable prediction models were used [23]. Subjects were identified as individuals from the Michigan Medicine Clinical Data Warehouse who had at least 2 CBCs within a rolling 2-year time frame between 1 January 2004 and 31 December 2013. Michigan Medicine is a large referral center as well as a primary care system. We used the presence of 2 CBCs to identify patients seeking regular care at Michigan Medicine and to provide at least two data points for a longitudinal prediction model. Subjects were excluded if age < 18, given the low incidence of GI tract cancers and the paucity of routine blood draws in a pediatric population. Data were collected from the date of the subject’s inclusion until 31 December 2018 (or diagnosis of GI tract cancer), including laboratory values from complete blood counts (CBCs), basic metabolic panels (BMPs), age, sex, self-reported race (as documented in the EHR demographics field), and Body Mass Index (BMI, in kg/m²). Data pre-processing was performed in SAS 9.4 (SAS Institute, Cary, NC, USA), with merging of the demographic, laboratory, biometric, and cancer registry data into a unified file. Biologically implausible laboratory or BMI values were excluded. ## 2.1. Predictor Variables Each model included patients’ demographic variables (age, sex, race), BMI, the individual component variables of the CBC, and the most recent BMP components. We included all the variables from the CBC since subtle changes within laboratory parameters other than hemoglobin or hematocrit may also reflect an iron-deficient state, e.g., elevated red cell distribution width (RDW), low mean cellular hemoglobin (MCH), and low mean cellular volume (MCV) [24,25].
We also included the BMP in these models, which may reflect comorbidities with potential links to GI tract cancers, e.g., reported associations between CRC and chronic kidney disease [26] (suggested by elevated blood urea nitrogen and creatinine) and between gastric cancer and diabetes [27,28,29] (as might be suggested by hyperglycemia). As machine learning methods can identify patterns that are imperceptible to clinicians, we included all variables from the CBC and BMP, as these methods tend to perform better with additional data rather than making fixed assumptions about the importance of individual predictors. ## 2.2. Primary Outcome The primary outcome was the diagnosis of a GI tract cancer, as determined by linkage to the University of Michigan Cancer Center Registry, which contains confirmed pathologic diagnoses of all cancers diagnosed at Michigan Medicine. We chose this method due to the lack of specificity of International Classification of Diseases (ICD)-9/10 codes at differentiating between the time of a diagnosis and the time of documentation in a chart (e.g., the potential that a newly charted diagnostic code may reflect documentation of an existing GI tract cancer diagnosed many years prior rather than a new diagnosis of cancer). In addition, during the study period, Michigan Medicine updated its EHR system (beginning in 2012), which resulted in overlapping usage of ICD-9 and ICD-10 codes beginning in 2012. As a result, we selected the cancer registry as it provided a consistent source of confirmed cancers. We limited the outcomes to the diagnosis of luminal GI tract cancers, defined for this study as cancers of the esophagus, stomach, small intestine, colon, rectum, or anus. Non-luminal GI tract cancers such as pancreaticobiliary cancers or liver cancers were excluded from this analysis as we were primarily interested in potential effects of occult GI tract bleeding, as might be reflected by changes in the CBC.
For individuals with GI tract cancers within the cancer registry, we used the date of the diagnosis as the individual’s final observation. For individuals with no GI tract cancer within the registry, we used the date of the last recorded CBC to define the end of the observation period. ## 2.3. Model Development We used three techniques of prediction model development: (1) single timepoint multivariate logistic regression; (2) multivariate logistic regression incorporating summarized longitudinal features; and (3) random forest machine learning incorporating longitudinal features. For each prediction technique, we developed a prediction model for diagnosis of a GI tract cancer at 6-months, 1-year, 3-years, and 5-years. The eligible sub-population for each time interval was determined in SAS and exported to Python for model building. For each model prediction interval, subjects were included who had at least 2 CBCs prior to the prediction interval, and at least one CBC within the year prior to the beginning of the prediction interval. For example, for the 1-year prediction interval, subjects were included who had at least one CBC between 1 and 2 years prior to the final observation (diagnosis of GI tract cancer/no cancer). For the 6-month prediction, we included those subjects who had at least one CBC between 6 and 12 months prior to the final observation. For the single timepoint multivariate logistic regression prediction models, we selected observations on the date of the CBC laboratory draw that was closest to the prediction window. Predictor variables included: age, sex, race, most recent BMI, values from the individual components of the CBC on that date (Hemoglobin [Hgb], platelets [Plt], White Blood Cell [WBC] count, etc.), and the values from the most recently available BMP (Sodium [SOD], Glucose [Gluc], blood urea nitrogen [UN], creatinine [Creat], etc.).
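The per-interval inclusion rules above (at least two CBCs before the prediction window, with at least one in the year before the window opens) can be sketched in pandas as follows; the column and function names are hypothetical, not those of the study database:

```python
import pandas as pd

def eligible_ids(cbc, final_date, horizon_days):
    """cbc: DataFrame with columns ['patient_id', 'draw_date'];
    final_date: Series of final-observation dates indexed by patient_id;
    horizon_days: length of the prediction interval (e.g., 365)."""
    df = cbc.merge(final_date.rename("final"),
                   left_on="patient_id", right_index=True)
    window_start = df["final"] - pd.Timedelta(days=horizon_days)
    prior = df[df["draw_date"] < window_start]  # draws before the window
    agg = prior.groupby("patient_id")["draw_date"].agg(["count", "max"])
    # at least one CBC within the year before the window opens
    year_before = (final_date.loc[agg.index]
                   - pd.Timedelta(days=horizon_days + 365))
    keep = (agg["count"] >= 2) & (agg["max"] >= year_before)
    return sorted(agg.index[keep])

cbc = pd.DataFrame({
    "patient_id": ["A", "A", "B"],
    "draw_date": pd.to_datetime(["2010-01-01", "2011-06-01", "2009-01-01"]),
})
final = pd.Series(pd.to_datetime(["2013-01-01", "2013-01-01"]),
                  index=["A", "B"])
print(eligible_ids(cbc, final, 365))  # -> ['A']
```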
To incorporate longitudinal elements into the logistic regression and random forest machine learning models, we calculated summary statistics for each subject, summarizing the trends of the individual components of the CBC in the 3 years prior to the prediction window. For example, for each individual component of the CBC (Hgb, Plt, WBC, etc.), we summarized the values over the prior 3 years by the maximum and minimum; the maximum and minimum slope of each predictor variable (i.e., where the slope is the ratio of the change in value to the difference in time between two consecutive observations); and the total variation (mean of the absolute value of the slopes). Because the laboratory data were obtained through routine clinical care (at irregular intervals that varied between individuals), the use of slopes between observations helped to better describe changes in laboratory values over time. These summary statistics were then added to the predictor variables in the base single timepoint logistic regression models to form the longitudinal logistic regression and longitudinal random forest machine learning models. Missing values for individual summary statistics or individual laboratory parameters were imputed using the median value observed across the cohort. ## 2.4. Statistical Analysis We calculated descriptive statistics of the cohort at baseline inclusion. For each prediction interval (6-months, 1-year, 3-years, 5-years), we performed a random 70/30 split, with $70\%$ of the individuals in a training set and $30\%$ in a testing set. Within each prediction interval, we used the training set to fit single timepoint logistic regression, longitudinal logistic regression, and longitudinal random forest machine learning models and evaluated prediction performance using the same testing set. We repeated this procedure 10 times and reported the mean performance characteristics on the testing set over 10 random splits.
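The longitudinal summaries described above, for a single CBC component observed at irregular intervals, can be sketched as follows (times in days; the values and function names are illustrative, not the study's code):

```python
import numpy as np

def longitudinal_features(times_days, values):
    """Max/min value, max/min between-draw slope, and total variation
    (mean absolute slope) for one laboratory component."""
    t = np.asarray(times_days, dtype=float)
    v = np.asarray(values, dtype=float)
    slopes = np.diff(v) / np.diff(t)  # change per day between consecutive draws
    return {
        "max": float(v.max()), "min": float(v.min()),
        "max_slope": float(slopes.max()), "min_slope": float(slopes.min()),
        "total_variation": float(np.abs(slopes).mean()),
    }

# Hypothetical hemoglobin values drawn at day 0, 100, and 300
feats = longitudinal_features([0, 100, 300], [14.0, 13.0, 12.0])
print(feats["min_slope"])  # -> -0.01
```

Using slopes rather than raw differences keeps the features comparable across patients whose draws are spaced differently, which is the motivation given above.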
We implemented logistic regression models with L2 regularization to minimize the potential effects of overfitting. To tune the optimal penalty coefficient for regularized logistic regression, we conducted 5-fold cross-validation, and the model was then fitted with the selected coefficient using the training set. For the longitudinal machine learning model, we used the random forest technique. Random forest machine learning is an ensemble, tree-based machine learning algorithm used to classify individuals [30,31], which has been used in multiple prior models and described in detail [32,33,34]. Briefly, each tree classifies the individuals independently. Next, the random forest combines the decisions from each tree to generate a final classification for an individual, which can be understood as the majority vote of the trees. We also used 5-fold cross-validation to tune the hyperparameters related to the number, size, and features of the trees in the random forest. For both logistic regression and random forest models, we adjusted the class weight using a built-in argument in the Python scikit-learn package to address the problem of imbalanced classification (rare events of cancers relative to the population). Finally, for each model, we determined the area under the receiver operator curve (AuROC), Brier score (a measurement of overall performance), and the optimal (maximal) sensitivity/specificity using the test dataset. To balance the sensitivity and specificity, we determined the optimal cut-point, defined here as the point closest to the perfect classification point (0, 1) on the receiver operator curve. We also determined the relative variable importance rankings for the predictor variables in these models. In addition, we performed additional analyses of the performance of the models at predicting cancers by age category and by GI tract tumor type. ## 3.1. Baseline Cohort We identified 148,158 individuals who met the inclusion criteria (Table 1).
The mean age was 49.4 (SD = 17.3) and the majority were women ($62.1\%$, n = 91,995/148,158). Most of the subjects were Caucasian ($81.3\%$, n = 120,385/148,158), with $10.5\%$ being African American (n = 15,510/148,158), and $4.6\%$ Asian (n = 6795/148,158). Within the cohort, we identified 1025 GI tract cancers during the study period: the majority were CRCs ($53.5\%$, n = 548/1025), followed by gastric cancers ($16.6\%$, n = 170/1025), esophageal cancers ($12.5\%$, n = 128/1025), anal cancers ($8.6\%$, n = 88/1025), small bowel cancers ($7.7\%$, n = 79/1025), and cancers not otherwise specified within the GI tract ($1.2\%$, n = 12/1025). ## 3.2. Single Timepoint Prediction Using Logistic Regression We developed prediction models for the diagnosis of GI tract cancer at 6-months, 1-year, 3-years, and 5-years using multivariate logistic regression at a single timepoint (the last CBC prior to the prediction interval). We included patients’ age, sex, race, BMI, individual components of the CBC, and the most recent BMP (on or prior to the date of the CBC used for prediction). The results of the models’ performance are shown in Table 2. For 6-month prediction of GI tract cancer, the area under the receiver operator curve (AuROC) was 0.697 ($95\%$ CI 0.679–0.715), corresponding to a sensitivity of 0.603 and specificity of 0.690 in this population, with a Brier score of 0.007. At increasing time periods of prediction, the AuROC increased; however, the Brier score also increased to above 0.2, indicating lower performing models (Table 3). ## 3.3.
Longitudinal Logistic Regression Model We developed longitudinal logistic prediction models for the diagnosis of GI tract cancer (at 6-months, 1-year, 3-years, 5-years) using the predictor variables from the single timepoint logistic regression model with the addition of summary variables of the longitudinal CBCs (maximum/minimum, total variation, maximum/minimum slopes). Addition of these longitudinal features led to higher AuROCs for prediction at 6-months, 1-year, and 3-years as compared to the corresponding single timepoint logistic regression models (Table 2). For example, the 3-year AuROC was 0.735 ($95\%$ CI 0.713–0.757) compared to 0.683 ($95\%$ CI 0.665–0.701) for the single timepoint logistic regression prediction model (Figure 1). The 1-year longitudinal logistic regression AuROC was 0.705 ($95\%$ CI 0.689–0.722) with a Brier score of 0.008, compared to an AuROC of 0.683 ($95\%$ CI 0.665–0.701) with a Brier score of 0.224 (indicating poor performance) for the 1-year single timepoint logistic regression model. ## 3.4. Longitudinal Random Forest Machine Learning Model We developed longitudinal random forest machine learning prediction models for diagnosis of GI tract cancer (at 6-months, 1-year, 3-years, 5-years) using the predictor variables from the single timepoint logistic regression model with the addition of summary variables of the longitudinal CBCs. The random forest model AuROCs were greater than those of both logistic regression models for 6-month, 1-year, and 3-year predictions (Figure 1), with an AuROC of 0.750 at 3 years ($95\%$ CI 0.729–0.771) and a Brier score of 0.116. However, the confidence intervals of the AuROCs overlapped with those of the longitudinal logistic regression model for all three time periods (Table 2).
The variable importance factors for the random forest machine learning models at 1-year and 3-years demonstrated that the most recent (last) mean platelet volume (MPV), minimum MPV, and age were the three most heavily influential variables in these models (Figure 2). ## 3.5. Subanalysis by Age and Tumor Type We analyzed the longitudinal logistic regression and longitudinal random forest machine learning prediction models for their prediction success at 1- and 3-years by age group and category of GI tract cancer. We selected three age categories: age less than 50, age 50 years or older and less than 75, and age greater than or equal to 75. These ages were selected as they corresponded with screening age groups for colorectal cancer screening during this study period: CRC screening was recommended starting at age 50, not recommended for those less than 50, and individualized screening decision was recommended between ages 75 and 85. There was a trend toward lower AuROCs for those older than 75, suggesting that the models performed less well in this age group, although some of the confidence intervals overlapped, suggesting this was not a statistically significant difference (Table A1). To describe the imbalanced nature of these groups (overall cohort population was younger, with a median age of 49.4), we determined the imbalance ratio as the ratio of the number of negative samples (individuals without cancer) to the number of positive samples (individuals with cancer) in each category. These findings are consistent with established epidemiological trends of increasing GI tract cancers with age. Although the study was not powered to predict individual GI tract cancers, we calculated the performance of the models on the prediction of individual cancers (Table A2). In this setting, the imbalance ratios were more pronounced, e.g., there were only 30 small bowel cancers with sufficient longitudinal data to make 3-year predictions. 
The models performed less well in the setting of fewer events for this category. ## 4. Discussion The results of this retrospective single-center cohort study demonstrate that data within the electronic health record (including CBCs, BMPs, age, sex, race, BMI) can be used to help predict the diagnosis of luminal GI tract cancers (esophageal, gastric, small intestine, colorectal, and anal), with an AuROC of up to 0.750 for prediction of GI tract cancer at 3 years ($95\%$ CI 0.729–0.771; Brier score = 0.116) with a random forest machine learning model, compared to the longitudinal logistic regression model with an AuROC of 0.735 ($95\%$ CI 0.713–0.757) with a Brier score of 0.205. While there was a trend toward improvement with machine learning compared to longitudinal logistic regression, the overlapping confidence intervals mean the model is not definitively better. One possible explanation is the relative rarity of GI cancers compared to the cohort as a whole and the general need for more events to outperform logistic regression in machine learning techniques [35]. In addition, this lack of superiority of ML has been found in other clinical prediction domains as well, highlighting the strengths of multivariate logistic regression and the difficulties in outperforming these models with newer techniques [34,36]. At 5-years’ prediction, when longitudinal changes would be less likely to have immediate predictive power, the single timepoint logistic regression model had a higher point estimate AuROC than the longitudinal models at 0.703 ($95\%$ CI 0.686–0.720), but with a Brier score of 0.213 (indicating overall lower performance). Nonetheless, this study demonstrates important signals that prediction models of luminal GI tract cancers may be useful adjunctive tools to existing clinical intuition and practice guidelines (e.g., that patients with overt GI bleeding or IDA should undergo endoscopic evaluation) [17]. 
One important aspect of the random forest machine learning method is that it can identify predictor variable associations that may not otherwise be intuitive. For example, mean platelet volume (MPV) was one of the most important variables in the longitudinal random forest machine learning model. There has been growing interest in the potential usefulness of MPV as a marker of systemic inflammation in GI tract cancers, with possible diagnostic implications for gastric and colorectal cancers [37,38,39,40,41] and possible prognostic implications for esophageal cancers [42]. Clinicians rarely use this feature in routine practice; a prior survey reported that clinicians consider MPV to be the least useful component of the CBC [24]. Other predictor variables, such as age, are likely more intuitive to clinicians, consistent with established epidemiologic data of increasing incidence of GI tract cancers with increasing age [2,3]. While these models would be inadequate to replace existing CRC [7,8] screening programs [43], they might still have adjunctive utility. Guidelines have already implicitly established a tolerance for the “number needed to scope,” or the number of colonoscopies needed to detect one cancer. For example, guidelines recommend bidirectional endoscopy (EGD and colonoscopy) for new-onset IDA [17], with a number needed to scope for a diagnosis of cancer of between 10 and 100 (incidence ranging between 1 and $10\%$) [44,45]. Similarly, a positive fecal immunochemical test (FIT) used for CRC screening has a PPV of 2.9–$7.8\%$ for a diagnosis of GI tract cancer [46], corresponding to a number needed to scope of approximately 13 to 35 to diagnose one cancer.
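The “number needed to scope” in the passage above is just the reciprocal of the positive predictive value; a quick check of the quoted FIT range (PPV of 2.9–$7.8\%$):

```python
def number_needed_to_scope(ppv: float) -> float:
    """Endoscopies required to diagnose one cancer at a given positive predictive value."""
    return 1.0 / ppv

# PPV of 7.8% -> ~13 scopes per cancer diagnosed; PPV of 2.9% -> ~34 scopes.
print(round(number_needed_to_scope(0.078)), round(number_needed_to_scope(0.029)))  # 13 34
```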
Thus, using these reference points, a prediction model utilizing EHR data could be calibrated to achieve a higher specificity, while tolerating a lower sensitivity (as this would not be replacing routine screening), until the positive predictive value reached an acceptable threshold for recommending diagnostic endoscopy. This type of model would be complementary to existing screening programs. There are several limitations to this study. First, this study was performed retrospectively at an academic medical center, using data collected through routine clinical care, and thus may not apply to other clinical practice settings. The eligible population included all patients receiving care at Michigan Medicine, which includes patients seen by Michigan Medicine primary care providers and specialists. We further narrowed our cohort to individuals with longitudinal follow-up within the Michigan Medicine health system by requiring at least two CBCs over two years. To maximize our eligible cohort, we did not exclude individuals based on the type of provider(s) seen at Michigan Medicine or other exclusions. For this exploratory study, we did not have an external validation cohort, so the accuracy of the models may decline in other settings, as there may be other unmeasured differences between populations. Second, because our inclusion criteria required the presence of at least two CBCs per patient (to determine longitudinal trends), there may be unknown, systematic differences between these patients and those with fewer CBCs (who were excluded from the cohort). An alternate model incorporating a single CBC could have advantages in a setting where primary care follow-up is limited or where CBCs are less commonly obtained. It is also reasonable to consider whether the increased performance of a longitudinal model is “worth” the added computational complexity that would be required for its deployment. 
Third, for purposes of this analysis, we considered the diagnosis of any GI tract cancer as a binary outcome, given the relatively rare incidence of GI tract cancers relative to the cohort size. However, the tumor biology of GI tract cancers is heterogenous. As a result, there may be different patterns specific for individual subtypes of GI tract cancer that this study was not powered to detect. Models focused solely on a single organ have the potential to have higher specificity but would not be designed to detect other GI tract tumors. Fourth, we were limited by predictor variable inclusion in our models due to a high degree of missingness of some suspected useful variables (CBC with differentials, ferritin), potentially limiting the predictive power relative to prior models for CRC that incorporated CBCs with differentials [20]. An additional limitation is that these models do not incorporate additional clinical history such as the findings of prior EGD or colonoscopy procedures, but we intentionally chose to focus on readily ascertainable parameters from existing EHR data for easier potential use in the future. ## 5. Conclusions Nonetheless, despite these limitations, this work offers some important contributions to the diagnostic evaluation of GI tract cancers, demonstrating that logistic regression or random forest machine learning models using EHR data can help predict the presence of GI tract cancers. Improved diagnosis in this domain is critical. First, given epidemiologic trends with an increasing incidence of CRC at younger ages, additional detection strategies are needed to identify diseases earlier in this younger cohort, who would not yet have undergone routine CRC screening. Second, given limited endoscopic access in some settings, methods to identify patients at greatest risk of GI tract cancer are increasingly important, as they could help prioritize GI diagnostic evaluations on those individuals at greatest risk. 
Third, prior work has demonstrated that IDA is not always diagnosed or evaluated fully, meaning that additional automated methods of helping clinicians identify patients at increased risk of GI tract cancers are needed. In summary, these models could help fill an important need and assist clinicians in the diagnosis of GI tract cancers. Further refinement and evaluation of these models in a larger cohort, with external validation, is needed prior to any potential prospective clinical evaluation.
# Caregiving Self-Efficacy of the Caregivers of Family Members with Oral Cancer—A Descriptive Study ## Abstract In Taiwan, oral cancer is the fourth most common cause of cancer death in men. The complications and side effects of oral cancer treatment pose a considerable challenge to family caregivers. The purpose of this study was to analyze the self-efficacy of the primary family caregivers of patients with oral cancer at home. A cross-sectional descriptive research design and convenience recruiting were adopted to facilitate sampling, and 107 patients with oral cancer and their primary family caregivers were recruited. The Caregiver Caregiving Self-Efficacy Scale-Oral Cancer was selected as the main instrument to be used. The primary family caregivers’ mean overall self-efficacy score was 6.87 (SD = 1.65). Among all the dimensions, managing patient-related nutritional issues demonstrated the highest mean score (mean = 7.56, SD = 1.83), followed by exploring and making decisions about patient care (mean = 7.05, SD = 1.92), acquiring resources (mean = 6.89, SD = 1.80), and managing sudden and uncertain patient conditions (mean = 6.17, SD = 2.09). Our results may assist professional medical personnel to focus their educational strategies and caregiver self-efficacy enhancement strategies on the dimensions that scored relatively low. ## 1. Introduction In 2018, oral cancer incidence and death were the highest among men in Taiwan, and oral cancer was the fourth most common cause of cancer-induced death in men [1]. When patients with oral cancer undergo treatment and experience its side effects [2,3,4], patients themselves, their families, and medical caregivers encounter great challenges in caregiving. Stage classification of oral cancer includes four stages according to the size of the primary tumor (T), involvement of locoregional lymph nodes (N), and distant metastases (M) [5,6,7,8]. 
Stage I is determined by T1–2 and N0–1, stage II by T1–2 and N2 or T3 and N0–2, and stage III by T4 or N3. Stage IV is for patients with metastatic disease [7]. This classification can aid in treatment planning, the estimation of recurrence risk, and the assessment of patient survival [5]. The overall 5-year survival rate for patients in a cohort study at Memorial Sloan Kettering Cancer Center was $63\%$ [9]. In a multicenter retrospective analysis, an advanced T stage was significantly correlated with poor overall survival and disease-specific survival [10]. Lymph node involvement is the most important prognostic factor in oral cancer: the survival rate is reduced by $50\%$ compared with that of patients with similar primary tumors but without neck lymph node involvement [11,12]. The impact of oral cancer on patients’ physical symptoms and impairments has been documented across stages, especially for advanced oral cancer [13]. Oral cancer treatment may involve the combined use of surgery, chemotherapy, and radiotherapy, among which surgery is the most essential [14]. However, surgical treatment may change patients’ facial appearance and cause oral disabilities, such as impaired communication and eating functions [2]. In addition, patients with oral cancer encounter the side effects of chemotherapy or radiotherapy. Therefore, care for oral cancer is more challenging than that for other cancers [15]. In Taiwan, family members play a crucial role in the home care of patients with oral cancer, particularly during outpatient treatment. These family members handle patients’ nutritional problems, make care decisions, manage disease-related emergencies, and seek relevant resources [16]. However, the difficulties they encounter during home care [16] may discourage these family members from putting effort into patient care, particularly when they lack belief in their own capability, worsening the subsequent care results.
Self-efficacy refers to an individual’s capability belief, or perceived capability, to perform a specific health care behavior [17]. During health care processes, self-efficacy is an essential ability that helps individuals overcome difficulties and strive for better health [18]. Self-efficacy is a key factor that affects health care behavior [19] because it positively affects individuals’ behavioral motivation and persistence when they encounter care difficulties [18]. In the research literature, investigations of gender differences in self-efficacy have produced inconsistent findings. Several researchers described self-efficacy as one factor that accounts for gender differences [20,21]. While some researchers suggested that men reported greater self-efficacy than women [20], others suggested that women reported greater self-efficacy than men [21]. In contrast, no gender differences in self-efficacy ratings were noted in some studies [22,23,24]. Bandura [25] also suggested that age may be a factor that contributes to personal efficacy because the biological processes of aging result in declining ability. Research on the effects of age on self-efficacy has produced mixed results [20,22,23,24]: several studies indicated no relationship between self-efficacy ratings and age [20,22,24]. Educational and socio-economic levels may also be personal factors associated with self-efficacy, since they lead to better access to resources. One researcher suggested self-efficacy expectations as one factor that accounts for educational differences in responses to outcome measures [22]. However, several studies showed no relationship between self-efficacy and educational level [23,24], and most studies on the effects of economic level on self-efficacy showed no significant difference [23,24].
Understanding the self-efficacy of family caregivers can help medical teams gauge caregivers’ capability beliefs regarding home care of patients with oral cancer, identify relevant influential factors, and provide countermeasures to strengthen those beliefs. This may improve the quality of home care for patients with oral cancer. Therefore, the purpose of this study was to assess the self-efficacy of the primary family caregivers of patients with oral cancer at home. ## 2.1. Study Design The current study adopted a cross-sectional descriptive research design with convenience sampling to examine the self-efficacy of the primary family caregivers of patients with oral cancer at home. ## 2.2. Sample and Procedure In total, 107 primary family caregivers of outpatients were recruited for a structured questionnaire survey. The participants were enrolled from the radiology outpatient department of a teaching hospital in northern Taiwan from May 2016 to May 2018. Only patients who (1) were aged ≥20 years, (2) had been diagnosed with oral cancer, and (3) had received oral cancer-related surgery, chemotherapy, or radiotherapy were included. Moreover, the family caregivers of these patients were required to be (1) aged ≥20 years, (2) recognized as the primary family caregivers by the patients, and (3) living with the patients. After this study passed the ethical review and the family caregivers signed the informed consent form, a research assistant distributed our questionnaires to the family caregivers. The assistant checked whether each retrieved questionnaire was completely filled out immediately after the caregiver submitted it. Participants who had missed items were asked to complete them. All patient medical characteristics were collected from medical records by the research assistant. ## 2.3.
Ethical Considerations This study was approved by the institutional review board of a teaching hospital in northern Taiwan (VGHIRB No.: 2014-04-001AC). The research assistant verbally explained the research objective, data protection principles, and research procedures to obtain the participants’ consent and asked them to sign the informed consent form. Codes were used in the questionnaire in place of personal information to protect participant privacy. For participants who were unwilling to proceed with the survey or were not physically suitable for further investigation, the research assistant acknowledged their withdrawal and stopped collecting their data. ## 3.1. Sociodemographic Variables The current study collected the sociodemographic variables of the family caregivers and the patients’ medical characteristics. The collected sociodemographic variables were sex, age, marital status, education level, religious affiliation, employment status, and household income. The collected medical characteristics were the time of sickness, stage of cancer, current treatment status, and treatment side effects. Information related to the family caregivers, such as their relationships with the patients, manner of care, and care time, was also collected. ## 3.2. Caregiver Caregiving Self-Efficacy Scale-Oral Cancer The current study applied the Caregiver Caregiving Self-Efficacy Scale-Oral Cancer (CSES-OC) [26] to estimate the self-efficacy of the family caregivers. The scale consisted of 18 items. According to factor analysis, the scale could be divided into four subscales: acquiring resources (AR; six items), managing sudden and uncertain patient conditions (MS; five items), managing patient-related nutritional issues (MN; four items), and exploring and making decisions on patient care (MD; three items).
Some examples of the items for AR are “I am confident that I am able to acquire financial support”, “I am confident that I am able to seek consultation on the provision of sick family member care”, and “I am confident that I am able to acquire respite from caregiving”. Examples for MS are “I am confident that I am able to manage the sudden onset of conditions in the sick family member”, “I am confident that I am able to handle uncertainty about cancer progression”, and “I am confident that I am able to handle the sick family member’s uncertainty about death”. Examples for MN are “I am confident that I am able to prepare a suitable diet” and “I am confident that I am able to improve the sick family member’s willingness to eat”. Examples of the items for MD are “I am confident that I am able to explore the most suitable care for the sick family member” and “I am confident that I am able to make decisions on sick family member care”. The Cronbach’s alpha of each subscale ranged between 0.78 and 0.91, and that of the overall scale was 0.95. The test–retest reliability with a 2-week interval was $r = 0.83$ ($p < 0.001$), and its criterion-related validity with the General Self-Efficacy Scale was $r = 0.59$ ($p < 0.001$). Items were rated on an 11-point Likert-type scale ranging from 0 (not at all confident) to 10 (completely confident), with higher total scores indicating higher self-efficacy [26]. ## 3.3. Statistical Analysis The current study used SPSS for Windows (version 22.0; SPSS, Chicago, IL, USA) for data processing. Descriptive statistics, such as means, SDs, frequencies, and percentages, were obtained to examine the family caregivers’ sociodemographic variables, patients’ medical characteristics, caregiver–patient relationships, manner of care, care times, and caregiving self-efficacies.
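The internal-consistency values quoted for the CSES-OC (subscale alphas of 0.78–0.91, overall 0.95) come from Cronbach’s alpha. A dependency-free sketch of that computation; the item responses below are fabricated for illustration, not study data:

```python
def cronbach_alpha(items):
    """items: one list of respondent scores per scale item, aligned by respondent."""
    k, n = len(items), len(items[0])

    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(variance(it) for it in items) / variance(totals))

# Fabricated 0-10 Likert responses: 3 items rated by 4 caregivers.
items = [[8, 6, 9, 5], [7, 6, 8, 4], [9, 5, 8, 6]]
print(round(cronbach_alpha(items), 2))  # 0.92
```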
The differences in caregiving self-efficacy across the study variables (e.g., family caregivers’ sociodemographic variables, patients’ medical characteristics, caregiver–patient relationships, and manner of care) were examined using the independent-samples t-test and analysis of variance (ANOVA). In addition, a Pearson product–moment correlation test was performed to assess the correlations of caregiver age, care time, and patient time of sickness with caregiving self-efficacy. ## 4.1. Sociodemographic Variables of the Primary Family Caregivers and the Manner of Care The current study recruited 107 primary family caregivers as participants, with a mean age of 51 years (SD = 10.8 years, range = 20–70 years). Among the participants, $91.6\%$ were female, $72.9\%$ were the patients’ spouses, $56.1\%$ had an education level of senior high school or above, $87.9\%$ were married, $26.2\%$ were still employed, $47.7\%$ had an annual household income of <TWD 500,000, $86.9\%$ had a religious affiliation, and $26.2\%$ had a chronic disease (Table 1). Moreover, $41.1\%$ provided care with the assistance of other caregivers, $40.2\%$ provided care without rest, $83.2\%$ had no experience in patient care, and the mean care time was 36.4 months (SD = 40.3 months, range = 1–171 months; Table 1). ## 4.2. Patients’ Medical Characteristics Among the 107 patients with oral cancer, the mean time of sickness was 42.5 months (SD = 44.4 months, range = 1–171 months). Of all the patients, $36.4\%$ had stage IV oral cancer, $78.5\%$ had completed their treatment, and $36.4\%$ were still experiencing the side effects of the treatment (Table 2). ## 4.3. Caregiving Self-Efficacy of the Primary Family Caregivers The CSES-OC was used to measure the self-efficacy of the primary family caregivers. The overall and subscale (i.e., AR, MS, MN, and MD) scores were considered. The mean overall self-efficacy score was 6.87 (SD = 1.65).
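Of the tests named in Section 3.3, the Pearson product–moment correlation has the simplest closed form; a dependency-free sketch with fabricated age and score values (not study data):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Fabricated caregiver ages paired with overall self-efficacy scores.
ages = [35, 42, 51, 58, 63, 70]
scores = [7.1, 6.4, 7.0, 6.8, 6.5, 7.2]
print(round(pearson_r(ages, scores), 2))
```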
Moreover, of all the subscales, MN demonstrated the highest mean score of 7.56 (SD = 1.83), followed by MD (7.05, SD = 1.92), AR (6.89, SD = 1.80), and MS (6.17, SD = 2.09) (Table 3). ## 4.4. Differences in the Sociodemographic Variables of the Primary Family Caregivers and Manner of Care in Caregiving Self-Efficacy No significant correlations were discovered between the overall self-efficacy score and age ($r = 0.06$, $p > 0.05$) or between the overall self-efficacy score and care time ($r = 0.08$, $p > 0.05$). Moreover, no significant differences were noted for the other sociodemographic variables and manner of care in caregiving self-efficacy (Table 1). ## 4.5. Differences in Medical Characteristics in Caregiving Self-Efficacy No significant correlation was discovered between the time of sickness and the overall self-efficacy score ($r = 0.11$, $p > 0.05$). Moreover, the differences among patients’ other medical characteristics in the overall self-efficacy were nonsignificant (Table 2). ## 5. Discussion In this study, the researchers analyzed the caregiving self-efficacy of the primary family caregivers of patients with oral cancer. Results of the current study may aid professional caregivers in understanding the capability belief of primary family caregivers in facing challenges during the care process and the most challenging tasks they are likely to encounter. According to the self-efficacy classification proposed by Kobau and DiIorio [27], a self-efficacy score of 4–7 (range: 0–10) denotes a moderate level of self-efficacy. Here, the mean caregiving self-efficacy score was 6.87, indicating that the caregivers in this study had moderate self-efficacy. However, because the scoring methods used for measuring self-efficacy have varied between previous relevant studies [28,29,30], the researchers could not directly compare the results of the current study with those of other studies.
The mean self-efficacy score of the current study was close to that of Liang, Yates, Edwards, and Tsay [22], where the opioid-taking self-efficacy of patients with cancer was estimated, and it was slightly lower than that of Kobau and DiIorio [27], where the self-efficacy of patients with epilepsy was assessed. The possible reason for this was that the care difficulty differed between diseases, which may have affected the participants’ perceived level of capability. Here, the caregiving self-efficacy in the MN dimension scored the highest, with a mean score of 7.56. Handling the nutritional issues of patients might not be the most challenging task for caregivers. Increasing their willingness to eat and preparing suitable food for them [26] were found to be essential behavior tasks to promote their physiological recovery. The self-efficacy in the MD dimension scored the second highest, with a mean score of 7.05. In this dimension, the behavior tasks relevant to caregiving self-efficacy included managing the side effects due to cancer treatment and making treatment-related decisions [26]. These types of behavior tasks aim at providing home-based medical assistance. Moreover, the AR dimension scored the third highest, with a mean score of 6.89. Here, the caregiving self-efficacy-related behavior tasks encompass managing emotional issues, receiving care counseling, and being able to rest during the care process [26]. Emotional management was related to tasks such as dealing with the emotions of patients who were facing oral cancer treatment and prognosis, as well as the emotions of caregivers themselves [16,26]. According to the current results, this was the second most challenging set of behavioral tasks. It was a self-assistance behavior task related to the maintenance of the physical and mental health of the caregivers themselves. Finally, the MS dimension scored the lowest, with a mean score of 6.17. 
For caregivers, handling the safety and death issues of patients was the most challenging task. The related caregiving self-efficacy behavior tasks include handling sudden situations, managing the uncertainty in the disease process, and managing poor prognosis [26]. For health care professionals, these most difficult care tasks indicate the care priorities for patients and their family caregivers. Family caregivers’ capability belief (i.e., self-efficacy) is a key factor that affects subsequent care behavior and care results [31,32]. Professional medical personnel can increase family caregivers’ capability belief according to the four sources of efficacy beliefs in self-efficacy theory: family caregivers’ performance accomplishment, vicarious experience, professional caregivers’ verbal persuasion, and consideration of family caregivers’ physical and emotional arousal [17,32]. Furthermore, professional medical personnel could integrate relevant educational strategies, including diary logs [33], videos, and brochures [32], to improve family caregivers’ capability beliefs in taking care of patients with oral cancer. In this study, the researchers adopted a cross-sectional descriptive research design. Therefore, the current study could not capture changes in family caregiving self-efficacy with respect to the patient’s condition or required care time. The present study enrolled patients at any point in the disease period; the timing of patient enrollment was not controlled. Some patients were still undergoing their course of treatment, while others had finished it. Different times or stages of treatment may affect the challenges of family care and, therefore, caregivers’ perceived capability. In addition, the sample size was small across all sociodemographic and medical variable groups, making it unlikely that statistical differences could be detected in this population.
On the other hand, the current research used convenience sampling, which may have introduced sampling bias: families with heavy care loads may have been excluded as a result. The samples were collected from a single teaching hospital in northern Taiwan, which may limit the generalizability of the current results. ## 6. Conclusions Our current results indicated that the family caregiving self-efficacy scores for the CSES-OC MS and AR dimensions were the lowest and the second lowest, respectively. The current study recommends that professional medical teams focus their educational strategies and caregiver self-efficacy enhancement strategies on the dimensions that scored relatively low (i.e., handling patients’ safety and death issues and managing physical and mental health problems through self-assistance). For example, issues in these dimensions include managing the emotional distress of a sick family member and the caregiver themself, handling uncertainty about the sick family member’s cancer progression and death, and managing the sudden onset of conditions in the sick family member. Through family caregivers’ performance accomplishment, vicarious experience, professional caregivers’ verbal persuasion, consideration of caregivers’ physical and emotional arousal, and the use of educational media, the self-efficacy of family caregivers in taking care of a patient with cancer may be increased. The current results are from an exploratory study, and the cut-off points for the self-efficacy score were drawn from research on other patient groups. The current study suggests that more research is needed.
# Decoy Receptors Regulation by Resveratrol in Lipopolysaccharide-Activated Microglia ## Abstract Resveratrol is a polyphenol that acts as antioxidants do, protecting the body against diseases, such as diabetes, cancer, heart disease, and neurodegenerative disorders, such as Alzheimer’s (AD) and Parkinson’s diseases (PD). In the present study, we report that the treatment of activated microglia with resveratrol after prolonged exposure to lipopolysaccharide is not only able to modulate pro-inflammatory responses, but also up-regulates the expression of the decoy receptors IL-1R2 and ACKR2 (an atypical chemokine receptor), also known as negative regulatory receptors, which are able to reduce functional responses and promote the resolution of inflammation. This result might constitute a hitherto unknown anti-inflammatory mechanism exerted by resveratrol on activated microglia. ## 1. Introduction Resveratrol (3,5,4′-trihydroxy-trans-stilbene), a polyphenol, is a stilbenoid belonging to the phytoalexin superfamily, mostly found in red grapes, blueberries, raspberries, mulberries, and peanuts [1]. Resveratrol has two isomers, with trans and cis configurations; trans-resveratrol is the non-toxic stereoisomer that has been widely described to have beneficial effects on health [2]. Resveratrol is part of a group of compounds that act as antioxidants do, protecting the body against diseases, such as diabetes, cancer, heart disease, ileitis, obesity, and neurodegenerative disorders, such as Alzheimer’s (AD) and Parkinson’s diseases (PD) [3,4,5,6]. In this respect, in experimental models of both AD and PD, it has been shown that resveratrol exerts neuroprotective actions; however, its application in therapeutic protocols is limited by its poor bioavailability due to rapid metabolism in the intestine and liver [3,4,5,6,7].
Resveratrol is able to cross the blood–brain barrier (BBB) via tight junctions, thus carrying out a protective action in the brain tissue that could reduce the loss of neurons arising from neurodegenerative diseases [4,5,6,7,8]. Many studies carried out in recent years have focused on the therapeutic potential of natural compounds, in particular those extracted from plants, as additional treatments for neurodegenerative diseases [5,6,7,8,9,10]. Among the vast diversity of natural compounds that have been studied for their neuroprotective effects are polyphenolic compounds, such as curcumin, capsaicin, epigallocatechin gallate, and resveratrol [7,8,9,11,12,13]. Apart from having antioxidant and anti-inflammatory actions, resveratrol modulates the intracellular signals involved in neuronal survival and inhibits beta-amyloid (Aβ) protein aggregation [10,11,12,13,14]. Consistent with these data, in a mouse model of Parkinson’s-like disease, resveratrol treatment was reported to protect the dopaminergic (DA) neurons of the Substantia Nigra pars compacta (SNpc) against neurotoxic insult by modulating inflammatory reactions through SOCS-1 activation [11,12,13,14,15]. Decoy receptors, including IL-1R2 and the atypical chemokine receptors (ACKRs), are involved in mechanisms of immune evasion adopted by pathogens. In IL-1R2, the lack of an intracellular TIR domain makes the receptor unable to initiate signal transduction following binding with IL-1 [12,16]. The main types of ACKRs are ACKR1, ACKR2, ACKR3, and ACKR4 [13,17]. These molecules are able to recognize and bind specific growth factors or inflammatory cytokines efficiently; however, they are structurally incapable of initiating and transducing signals, acting as a molecular trap for the agonist and for signaling receptor components.
All of these members, also referred to as chemokine-binding proteins, scavengers, receptor antagonists, negative regulatory receptors, anti-inflammatory ligands, and decoys, act as brakes on functional responses [14,18]. IL-1R2 functions as a negative regulator of several IL-1 family members, as well as of TLRs; thus, it is involved in several pathophysiological contexts in which inflammation and innate and adaptive immune responses play a significant role [19]. ACKR2, previously known as D6, mediates the resolution of inflammation in various conditions, such as infections, autoimmune diseases, cancer, and neurodegenerative conditions [14,15,16,17,18,19,20]. In a previous work, we reported that pre-treatment of LPS-activated microglia with resveratrol up-regulated the phosphorylation of JAK1 and STAT3, as well as the expression of suppressor of cytokine signaling (SOCS)3, demonstrating that the JAK-STAT signaling pathway is involved in the anti-inflammatory effect exerted by resveratrol [15,16,17,18,19,20,21]. The present research was designed to determine the potential anti-inflammatory effects of resveratrol through the regulation of decoy receptor expression, IL-1R2 and ACKR2, in microglia activated by prolonged exposure to LPS. The results obtained from this study provide, for the first time, evidence of a new anti-inflammatory mechanism exerted by resveratrol on activated microglia. ## 2.1. Cell Culture and Treatments The murine microglial cell line N13 was grown in RPMI 1640 basal medium enriched with $10\%$ heat-inactivated fetal bovine serum (FBS), $1\%$ L-glutamine (2 mM), and $1\%$ penicillin-streptomycin solution (100 U/mL penicillin; 100 μg/mL streptomycin) (Life Technologies-Invitrogen, Milan, Italy) in a CO2 incubator set to $5\%$ CO2 at 37 °C in a humidified atmosphere until $70\%$ confluence.
For the treatments, we used 10 μM resveratrol (trans-3,4′,5-trihydroxystilbene; purity > $99\%$ GC; Sigma Aldrich, St. Louis, MO, USA) and the cell wall component LPS from *Salmonella typhimurium* at a concentration of 100 ng/mL. N13 cells were submitted either to a single treatment with LPS or resveratrol or to a combined treatment with resveratrol followed 1 h later by LPS (Sigma Aldrich), for 72 h.

## 2.2. Cytotoxicity Assay

Cell viability of N13 cells was evaluated by the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay. The cells were seeded in 96-well plates at a density of 8 × 10³/well and treated with LPS alone or in the presence of resveratrol. MTT was solubilized in 1X PBS and added to the wells at a working concentration of 0.5 mg/mL from a stock solution of 5 mg/mL. After 4 h of incubation in a CO2 incubator at 37 °C in a humidified atmosphere, the formazan crystals were solubilized in dimethyl sulfoxide (DMSO) by keeping the plates under agitation for 20 min. Since the amount of formazan is directly proportional to the number of viable cells, it was quantified by measuring the optical density at 560 nm and subtracting the background at 670 nm using a Victor Multiplate Reader (Wallac, Perkin Elmer, Milan, Italy).

## 2.3. Reverse Transcriptase-Polymerase Chain Reaction (RT-PCR) and End-Point PCR

Cells were harvested, and the total RNA was extracted using TRIzol isolation reagent (Invitrogen, Milan, Italy) according to the manufacturer’s instructions. The isolated RNA was reverse transcribed into cDNA in a reaction containing 3 μg of total RNA, 40 U of RNase Out (Invitrogen), 40 mU of oligo dT, 0.5 mM dNTP (PCR Nucleotide Mix, Roche Diagnostics, Milan, Italy), and 40 U of Moloney Murine Leukemia Virus Reverse Transcriptase (Roche Diagnostics). cDNA synthesis was carried out at 37 °C for 59 min, terminated at 95 °C for 5 min, and the product was held at 4 °C.
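The viability readout of the MTT assay in Section 2.2 reduces to background-corrected absorbance normalized to the untreated control. A minimal sketch, in which the dilution arithmetic follows the stated stock and working concentrations and all optical-density values are hypothetical, not measured data:

```python
def mtt_viability(od_560, od_670, control_signal):
    """Background-corrected MTT signal as a percentage of the untreated control."""
    signal = od_560 - od_670          # subtract the 670 nm background reading
    return 100.0 * signal / control_signal

# a 1:10 dilution of the 5 mg/mL MTT stock gives the 0.5 mg/mL working concentration
dilution_factor = 5.0 / 0.5           # 10.0

# hypothetical optical densities: control well and an LPS-treated well
control_signal = 0.82 - 0.07          # background-corrected control
lps_viability = mtt_viability(0.58, 0.06, control_signal)
```

A viability value below 100% for the LPS-treated well would correspond to the reduced viability reported in Section 3.1.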
The cDNA was amplified by polymerase chain reaction (PCR) for 30 cycles using a thermal cycler (Eppendorf, Milan, Italy), with GAPDH cDNA amplified as a reference gene. At the completion of the PCR, TriTrack Loading Dye 6X (Thermo Fisher, Waltham, MA, USA) was added to the amplified samples prior to loading onto the agarose gel. The DNA bands were quantified by densitometry with the ImageJ software, and the results were normalized to GAPDH. Primer sequences for the tested genes are reported in Table 1.

## 2.4. Electrophoresis

After 72 h from the treatments, the cells were harvested and lysed with a lysis buffer ($1\%$ Triton X-100, 20 mM Tris-HCl, 137 mM NaCl, $10\%$ glycerol, 2 mM EDTA, 1 mM phenylmethylsulfonyl fluoride (PMSF), 20 μM leupeptin hemisulfate salt, and 0.2 U/mL aprotinin) and subjected to several freeze–thaw cycles to facilitate lysis. The lysates were cleared by centrifugation at 12,800× g for 20 min at 4 °C, and the proteins were quantified by the Bradford protein assay [22,23,24,25,26,27,28,29,30,31,32,33,34,35]. A quantity of 25 μg of protein from each sample was diluted with a sample buffer (0.5 M Tris-HCl pH 6.8, $10\%$ glycerol, $10\%$ (w/v) SDS, $5\%$ β-mercaptoethanol, and $0.05\%$ (w/v) bromophenol blue) and then boiled for 3 min. Finally, the samples were loaded onto $4\%$–$12\%$ SDS precast polyacrylamide gels (BioRad Laboratories, Tokyo, Japan) and separated by size at a constant voltage of 200 V.

## 2.5. Western Blotting

Upon completion of electrophoresis, the resolved proteins were transferred onto nitrocellulose membranes and blocked with $5\%$ fat-free milk diluted in $0.1\%$ (v/v) Tween 20 in PBS to avoid nonspecific binding.
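The densitometric normalization used for the PCR bands in Section 2.3 (against GAPDH), and later for the Western blot bands (against β-actin), is a per-lane ratio. A minimal sketch with hypothetical ImageJ band intensities:

```python
def relative_density(target_bands, reference_bands):
    """Divide each target band by its loading-control band (GAPDH or beta-actin)."""
    return [t / r for t, r in zip(target_bands, reference_bands)]

# hypothetical ImageJ band intensities: control, LPS, resveratrol + LPS lanes
cd11b = [1200.0, 3100.0, 1500.0]
gapdh = [2000.0, 2000.0, 2000.0]

rel = relative_density(cd11b, gapdh)
fold_vs_control = [x / rel[0] for x in rel]   # express each lane relative to control
```

Expressing each normalized lane relative to the control lane makes the up- or down-regulation between treatments directly comparable across blots.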
After three ten-minute washes with $0.1\%$ Tween 20-PBS (T-PBS), the membranes were incubated overnight at 4 °C with mouse monoclonal antibodies (mAbs) anti-CD11b (1:200), anti-iNOS (1:200), anti-COX-1 (1:200), anti-COX-2 (1:200), anti-phospho-cPLA2 (1:200), anti-phospho-IkBα (1:200), anti-IL-1R2 (1:100), and anti-ACKR2 (1:100), and a mouse polyclonal Ab anti-β-actin (all from Santa Cruz Biotechnology, Inc., Milan, Italy) according to the manufacturer’s protocol. The membranes were then washed with $0.1\%$ Tween 20-PBS (for 20 min, 3 times) and incubated for 1 h, under agitation in the dark, with a horseradish peroxidase (HRP)-conjugated anti-mouse secondary antibody (Santa Cruz Biotechnology, Milan, Italy) diluted 1:10,000. Finally, the protein bands were visualized by chemiluminescence, and images were acquired using a ChemiDoc Imaging System. The bands were normalized against β-actin, and the results, expressed as means ± SD, were provided as relative optical density.

## 2.6. Enzyme-Linked Immunosorbent Assay (ELISA)

Sandwich ELISAs were performed following the kit manufacturer’s instructions to measure the levels of the cytokines TNF-α (Cat. # BMS607-3, Thermo Fisher-Invitrogen Technology, Milan, Italy) and IL-1β (Cat. # BMS6002, Thermo Fisher-Invitrogen Technology, Milan, Italy) in cell culture supernatants collected 72 h after the treatments. Since the intensity of the signal is directly proportional to the concentration of the antigen, the concentration was quantified and expressed in pg/mL. The determinations were performed in triplicate.

## 2.7. NO Production

NO, quantified as the NO2− concentration in the cell culture supernatants, was determined by the Griess assay. The supernatants were collected 72 h after the treatments and centrifuged to remove possible cellular residues.
After adding the Griess reagent ($0.1\%$ N-(1-naphthyl)ethylenediamine dihydrochloride and $1\%$ sulfanilamide in $2.5\%$ H3PO4) (1:1 v/v), the samples were incubated in the dark at room temperature for 10 min. The absorbance was then measured spectrophotometrically at 540 nm, using the conditioned medium as a blank to correct for the interference of nitrites present in the medium. The NO2− concentration was calculated by interpolation on a sodium nitrite (NaNO2) standard curve and is expressed as μmol/mL.

## 2.8. PGE2 Assay

To measure PGE2 in the cell culture supernatants, we performed a PGE2 assay. N13 cells (3 × 10⁶/well) were seeded in 6-well plates, pre-treated with resveratrol for 1 h, and then stimulated with LPS at a concentration of 100 ng/mL. The cultures were maintained at 37 °C for 72 h in humidified air containing $5\%$ CO2. PGE2 levels were determined in the supernatant using a competitive binding immunoassay (Cayman Chemical, Ann Arbor, MI, USA) according to the manufacturer’s instructions. Unstimulated cells were included as a control. The optical density was measured at λ = 405–420 nm using a precision microplate reader, and the PGE2 concentration, expressed in ng/mL, was determined using a PGE2 standard curve.

## 2.9. Statistical Analysis

The statistical analysis was carried out with the software package MINITAB Release 14.1 (Minitab Ltd., Coventry, UK). The results were analyzed by one-way ANOVA followed by the Tukey test; p-values ≤ 0.05 were considered significant.

## 3.1. Effects of Resveratrol and Prolonged LPS Treatment on Cell Viability of N13 Microglial Cells

The effect of pre-treatment with resveratrol on N13 cells treated with LPS was verified by the MTT cell viability test. We used an optimal concentration of LPS (100 ng/mL) and an optimal non-toxic resveratrol concentration (10 μM), selected on the basis of the experiments reported in our previous works [15,16,21,23].
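The nitrite quantification in Section 2.7, interpolating blank-corrected A540 readings on a NaNO2 standard curve, amounts to a linear fit and an inverse lookup. A minimal sketch; the standard-curve points and the sample absorbance below are hypothetical:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for a standard curve."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# hypothetical NaNO2 standard curve: concentration vs. blank-corrected A540
std_conc = [0.0, 25.0, 50.0, 100.0]
std_a540 = [0.00, 0.10, 0.20, 0.40]
slope, intercept = fit_line(std_conc, std_a540)

def nitrite_conc(a540):
    """Interpolate a blank-corrected sample absorbance on the standard curve."""
    return (a540 - intercept) / slope
```

Using the conditioned medium as the blank, as described above, removes the background nitrite contribution before the interpolation step.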
Furthermore, in the experiments of this work, we used prolonged exposure to LPS by treating the N13 cells with LPS for 72 h. The viability of the cells exposed for 72 h to 100 ng/mL LPS was significantly reduced in comparison to that of the untreated cells; however, pre-treatment with resveratrol significantly increased the viability of the LPS-treated cells with respect to that of the cells treated with LPS alone (Figure 1).

## 3.2. Resveratrol Modulates CD11b Expression Levels in LPS-Treated N13 Microglial Cells

Pre-treatment with resveratrol of the N13 cells treated with LPS modulated the expression of the microglial activation marker CD11b at both the transcriptional and post-transcriptional levels (Figure 2). In particular, resveratrol caused a significant decrease in the mRNA expression levels of CD11b in the N13 cells treated with LPS in comparison to those observed in cells treated with LPS alone (Figure 2A). The same results were observed for CD11b protein expression. The treatment with LPS induced a significant increase in CD11b protein expression levels in comparison to that of the control cells. In addition, resveratrol significantly decreased the CD11b protein expression levels in the N13 cells treated with LPS compared to those observed in the cells treated with LPS alone (Figure 2B). These results confirm the role of resveratrol as a modulator of microglial activation even in the case of prolonged exposure to LPS.

## 3.3. Effects of Resveratrol on Nitric Oxide Production and Inducible Nitric Oxide Synthase Protein Expression Levels in LPS-Treated N13 Microglial Cells

To evaluate the effect of resveratrol on NO production in N13 cells subjected to prolonged exposure to LPS, the levels of NO produced by microglia treated for 72 h with LPS in the absence and in the presence of resveratrol were tested.
The levels of NO released by the untreated cells and by those treated with resveratrol alone were low. Treatment of the microglia with LPS for 72 h, on the other hand, resulted in a significant increase in NO release compared to that observed in the control cells. Conversely, the LPS-treated cells pre-treated with resveratrol showed a significant reduction in NO production in comparison to the cells treated with LPS alone (Figure 3A). In addition, to evaluate whether the inhibitory effect of resveratrol on NO production could derive from an action of resveratrol on the inducible isoform of NO synthase (iNOS), the protein expression of iNOS after the different treatments was determined by Western blot. Again, significantly higher levels of iNOS protein expression were found in the cells treated with LPS alone in comparison to the untreated cells. As observed for NO release, the pre-treatment with resveratrol significantly inhibited the expression of iNOS in microglia subjected to prolonged exposure to LPS (Figure 3B).

## 3.4. Effects of Resveratrol on Pro-Inflammatory Cytokine Production in LPS-Treated N13 Microglial Cells

Pro-inflammatory cytokine production was assessed in culture supernatants by ELISA both in the presence and absence of resveratrol. As shown in Figure 4, there was a marked increase in TNF-α and IL-1β production in the microglial cells after 72 h of LPS stimulation. No effect of the treatment with resveratrol alone on pro-inflammatory cytokine production was observed in the microglial cells.
Moreover, we observed that the treatment with 10 μM resveratrol in the LPS-treated cells significantly down-regulated the production of pro-inflammatory cytokines in comparison to that of the N13 cells stimulated with LPS alone, suggesting that resveratrol is able to negatively modulate pro-inflammatory cytokine production in LPS-activated microglial cells.

## 3.5. Effects of Resveratrol on Arachidonic Acid (AA) Pathway in LPS-Treated Microglia

Cyclooxygenase-2 (COX-2) and cytosolic phospholipase A2 (cPLA2) participate in the production of eicosanoids, such as prostaglandin E2 (PGE2), through the arachidonic acid (AA) pathway, a key factor in neuroinflammatory and neurodegenerative diseases. Moreover, it is well known that COX-1 could be an important player in neuroinflammation, being predominantly localized in the microglia and thus implicated in the secretion of prostaglandins (PGs) in response to microglial activation [17,24]. For this reason, in microglial cells exposed to LPS for a prolonged time, we verified the anti-inflammatory ability of resveratrol in terms of the modulation of COX-1, COX-2, and p-cPLA2 protein expression. In addition, the evaluation of COX activity by quantification of the PGE2 produced by the enzymatic conversion of AA has been widely used and is well accepted as a method to evaluate potential COX inhibitors [18,25]. Therefore, we also verified the inhibitory action of resveratrol on the release of PGE2 in N13 cells treated with LPS for 72 h. From our results, it is evident that resveratrol caused a significant decrease in the expression levels of COX-1, COX-2, and p-cPLA2 in the cells treated for 72 h with LPS that had undergone a 1 h pre-treatment with resveratrol, in comparison to the cells treated with LPS alone (Figure 5A–C). We observed similar results in the PGE2 release assay.
Resveratrol, in fact, caused a significant decrease in the release of this inflammatory mediator in the microglia exposed to the prolonged treatment with LPS and pre-treated with resveratrol, compared to those subjected to the treatment with LPS alone (Figure 5D). From these results, it is therefore possible to conclude that resveratrol, in cases of prolonged inflammation, exerts an anti-inflammatory effect by inhibiting the AA pathway.

## 3.6. Effects of Resveratrol on NF-kB Pathway in LPS-Treated N13 Microglial Cells

In order to evaluate NF-kB activation, we measured the levels of the phosphorylated form of IkBα (p-IkBα), the inhibitory component of the NF-kB complex, since its phosphorylation is an essential step for NF-kB activation. In this regard, we determined the expression of p-IkBα in cell lysates obtained from LPS-stimulated N13 microglial cells. We observed that the LPS treatment for 72 h significantly increased the expression level of phosphorylated IkBα protein compared to that of the control cells, and the resveratrol pre-treatment significantly prevented this increase, as revealed by the densitometric analysis (Figure 6). These data indicate that resveratrol inhibited NF-kB activity in the LPS-treated N13 cells by suppressing the degradation of IkBα and, consequently, reducing pro-inflammatory mediator expression.

## 3.7. Effects of Resveratrol on IL1-R2 and ACKR2 Decoy Receptor Expression in LPS-Treated Microglia

IL-1R2 is a decoy receptor that blocks signal transduction after IL-1 binding. By regulating IL-1R2 expression, cells can modulate inflammation in response to exogenous stimuli. It has been shown that the up-regulation of IL-1R2 in microglial cells and brain endothelial cells attenuates CNS inflammation [12,16]. ACKR2, also known as the D6 decoy receptor, scavenges various inflammatory chemokines, thus affecting the inflammatory microenvironment.
In this regard, it is thought that the D6 decoy receptor could act as a resolving agent in neuroinflammatory processes because of its capacity to scavenge chemokines, leading to the alleviation of inflammation in different situations, including neuroinflammatory-based neurological disorders [20]. Therefore, in our study, we verified the ability of resveratrol to modulate the expression of the decoy receptors IL-1R2 and ACKR2 in terms of both mRNA and protein expression. The analysis of mRNA expression for both the IL1-R2 and ACKR2 decoy receptors showed a significantly reduced expression of both receptors in the microglial cells subjected to prolonged exposure to LPS in comparison to the cells treated with resveratrol alone. Interestingly, in the cells exposed to LPS but pre-treated with resveratrol, there was a drastic and highly significant increase in mRNA expression for both of the decoy receptors studied in comparison to the cells treated with LPS alone (Figure 7A,B). These results were confirmed by Western blotting analysis of IL1-R2 and ACKR2 protein expression. Also in this case, resveratrol caused a significant increase in the protein expression of both the IL1-R2 and ACKR2 decoy receptors in the cells treated for 72 h with LPS that had received a 1 h pre-treatment with resveratrol, in comparison to the cells treated with LPS alone (Figure 7C,D). Altogether, these results confirm the already known anti-inflammatory effect that resveratrol elicits on microglial cells in cases of neuroinflammation. At the same time, however, these experiments demonstrate, for the first time, the ability of resveratrol to modulate the expression of the IL1-R2 and ACKR2 decoy receptors, which could represent a new potential therapeutic target, especially in cases of prolonged inflammation of the CNS.
Based on our previous results evidencing that resveratrol treatment of LPS-activated microglia both inhibits pro-inflammatory mechanisms and induces anti-inflammatory responses [15,16,21,23], in this study we aimed to expand our knowledge of the other possible effects of this polyphenolic compound on the inflammatory responses of microglia submitted to prolonged LPS treatment. Here, we demonstrated that resveratrol, without affecting the viability of these cells, is able to specifically interfere with the pro-inflammatory responses induced by LPS, in terms of both a decreased production of IL-1β and an increased production of the IL-1β decoy receptor. IL-1β, a member of the IL-1 family, is a potent pro-inflammatory cytokine in the acute and chronic phases of inflammation; therefore, the reduced production of IL-1β after 72 h of incubation in resveratrol-treated cells demonstrates that this polyphenol could limit the amplification phase of inflammation. To analyze whether the resveratrol-treated microglia display a reduced ability to react to pro-inflammatory stimuli, we also investigated the response of the cells to LPS in terms of NO and TNF-α release. In this regard, we detected that after 72 h of treatment, resveratrol significantly reduced the production of both of these mediators. Moreover, we also demonstrated that after prolonged incubation of microglial cells with LPS, the resveratrol treatment counteracted the pro-inflammatory processes by down-regulating IkB degradation, which was significantly reduced in comparison to that of the cells treated with LPS alone. NF-kB is considered to be the most important transcription factor involved in the inflammatory responses, and thereby in the regulation of NO, TNF-α, and IL-1β [19,20,26,27].
Previously published papers have reported, in other cell types [16,23,28,29], that resveratrol significantly inhibited the degradation of IκBα in microglia stimulated with LPS, as well as the subsequent iNOS expression and production of TNF-α, suggesting that resveratrol can modulate the signaling pathways triggered by pro-inflammatory stimuli such as LPS. In the present study, we observed that this action of resveratrol on the production of TNF-α and the degradation of IκBα is also evident after a more prolonged incubation time, evidencing how this compound is effective at modulating inflammatory responses protracted over time and not only acute ones. In addition, we also demonstrated that the resveratrol treatment determined a significant reduction of COX-1, COX-2, and p-cPLA2, which are all mediators of pro-inflammatory responses. Cyclooxygenase exists as the two distinct isoforms COX-1 and COX-2 [23,24,30,31] and converts arachidonic acid (AA), released by PLA2 acting at the sn-2 position of membrane phospholipids, into prostaglandins and other lipid mediators. Both isoforms are important pro-inflammatory enzymes whose abnormal expression is a significant marker of neuroinflammation, as previously reported [24,31]. Moreover, AA also plays a key role in inflammation and neurodegenerative disorders [25,32]. In mammals, there are three major classes of PLA2s: secretory, calcium-independent, and calcium-dependent. Among them, the calcium-dependent cytosolic PLA2α (cPLA2α) has received the most attention because the cPLA2-AA-COX-2 pathway is an important signaling pathway in different inflammatory paradigms and in neurodegeneration [26,33]. In this regard, it has been demonstrated that the oxidative responses observed in many types of brain damage are associated with increased COX activity [27,34].
Moreover, it was reported that treatment with COX inhibitors may significantly reduce LPS- and IL-1β-induced oxidative damage in neuronal and microglial cells [28,35]. The results of our study are in accordance with those showing that, in mouse microglial cells, the reduction of COX-2 expression observed after a resveratrol treatment could be determined by the inhibition of NF-κB activation [29,36]. Therefore, our data evidence that NF-κB pathway inhibition, through the targeting of IκB phosphorylation by resveratrol, may ultimately reduce the pro-inflammatory phenotype, thereby down-regulating different mediators, including COX-1, COX-2, and p-cPLA2. A particularly important aspect emerging from our study was the ability of resveratrol to modulate the expression of the so-called decoy receptors, such as IL-1R2 and ACKR2. IL-1R2, first identified on monocytes, neutrophils, dendritic cells, and B cells in both humans and mice, has been reported to be largely involved in driving myeloid cell polarization, and consequently, in orienting the immune response. In fact, anti-inflammatory M2 stimuli, such as IL-4, IL-13, IL-10, IL-27, and aspirin, lead to the up-regulation of IL-1R2 expression, whereas the M1 phenotype activated by pro-inflammatory molecules (such as LPS, IFNγ, and TNF-α) exhibits a down-regulation of IL-1R2 [12,16]. The modulation of IL-1R2 expression has been reported in many cell types as a way to counterbalance and limit sustained inflammation in response to exogenous stimuli. In this regard, IL-1R2 up-regulation in the microglia and brain endothelial cells reduced brain inflammation in experimental models of IL-1β-induced neurotoxicity, as previously reported [30,31,32,37,38,39]. ACKRs are a group of proteins (four in humans) with a high degree of homology with chemokine receptors.
ACKRs are chemotactic receptors; however, since they are devoid of the structural domains required to activate canonical G protein-dependent receptor signaling, they do not transduce signals through G proteins and lack chemotactic activity [33,40]. Consequently, ACKRs fail to initiate classical signaling pathways after ligand binding and instead play a crucial role as regulatory components of chemokine networks in many physiological and pathological processes. Interestingly, the resveratrol treatment enhanced the expression of the anti-inflammatory IL-1β decoy receptor IL-1R2 and increased the expression of the other decoy receptor, ACKR2. IL-1R2 is the decoy receptor for IL-1; when IL-1R2 binds IL-1β, signal transduction cannot be triggered, and consequently, the pro-inflammatory action of this cytokine is neutralized [34,41]. Therefore, the increased expression of IL-1R2 on the microglial surface indicates a reduced responsiveness of these cells to IL-1β stimulation, significantly dampening the pro-inflammatory profile. Moreover, IL-1R2 also exists in a soluble form that can be rapidly shed, so the increased release of the soluble form by IL-1R2-overexpressing cells could neutralize the action of IL-1β on other cells, thus reducing the extent of the pro-inflammatory responses. The results of our pioneering work describe, for the first time, that resveratrol treatment of microglia exposed to a prolonged pro-inflammatory stimulus is able to counterbalance inflammatory responses through the regulation of decoy receptors. These findings suggest that the ability of the naturally occurring polyphenol resveratrol to modulate microglial activation, and thus regulate the inflammatory response, may help to explain its neuroprotective effects in several in vivo models of neuroinflammation.

## 4. Conclusions

The results of the present in vitro study suggest that polyphenolic compounds, such as resveratrol, may be useful in the treatment of inflammation associated with neurodegeneration, and that clinical studies may evaluate the possibility of their use as a therapeutic support strategy. The results of this study highlight the direct effects of resveratrol in the regulation of functional in vitro responses by microglial cells; it would therefore be of considerable importance to investigate the effect of this polyphenol in vivo for future clinical use, including in nano-formulations or intranasal spray applications, in order to overcome the bioavailability problems linked to the BBB or to the metabolism of endothelial cells. In light of these results, we plan to carry out further studies to clarify the modulation mechanisms of the decoy receptors underlying the neuroprotective effects of polyphenols.
# An Innovative Mei-Gin Formula Exerts Anti-Adipogenic and Anti-Obesity Effects in 3T3-L1 Adipocyte and High-Fat Diet-Induced Obese Rats

## Abstract

Background: To investigate the potential anti-obesity properties of an innovative functional formula (called the Mei-Gin formula: MGF) consisting of bainiku-ekisu, Prunus mume ($70\%$ ethanol extract), black garlic (water extract), and Mesona procumbens Hemsl. ($40\%$ ethanol extract) in reducing lipid accumulation in 3T3-L1 adipocytes in vitro and in obese rats in vivo. Material and Methods: The prevention and regression of high-fat diet (HFD)-induced obesity by the intervention of Japan Mei-Gin, MGF-3 and -7, and a positive-control health supplement powder were investigated in male Wistar rats. The anti-obesity effects of MGF-3 and -7 in rats with HFD-induced obesity were examined by analyzing the role of visceral and subcutaneous adipose tissue in the development of obesity. Results: The results indicated that MGF-1–7 significantly suppressed lipid accumulation and cell differentiation through the down-regulation of GPDH activity, a key regulator in the synthesis of triglycerides. Additionally, MGF-3 and MGF-7 exhibited a greater inhibitory effect on adipogenesis in 3T3-L1 adipocytes. The high-fat diet increased body weight, liver weight, and total body fat (visceral and subcutaneous fat) in obese rats, while these alterations were effectively improved by the administration of MGF-3 and -7, especially MGF-7. Conclusion: This study highlights the role of the Mei-Gin formula, particularly MGF-7, in anti-obesity action, with the potential to be used as a therapeutic agent for the prevention or treatment of obesity.

## 1. Introduction

Obesity is characterized by a defective body fat storage capacity caused by a chronic energy imbalance due to excess dietary consumption and insufficient physical activity [1]. The prevalence of obesity is still rising globally and has become a pervasive public health threat.
Obesity precedes type 2 diabetes mellitus, dyslipidemia, fatty liver injury, hypertension, and cancer, and is strongly associated with higher rates of premature disability and mortality [2]. Excess calorie intake accompanied by low energy expenditure leads to adipogenesis in both the liver and the adipose tissue, subsequently promoting the development of metabolic disorders [3,4]. White adipose tissue (WAT) represents a key reservoir for energy storage in the form of triglycerides (TG) and expands via increases in the size (hypertrophy) or number (hyperplasia) of differentiated mature adipocytes to allow adequate tissue expansion in response to high-fat dietary consumption or overnutrition [5,6,7]. The adipocyte differentiation process begins with the mitogenic expansion of adipocyte progenitors in the determination phase, which later gain the characteristics of mature adipocytes in the terminal differentiation phase. Anatomically, subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) are considered the two main types of WAT; the enlargement of SAT and VAT is considered to mediate obesity development and its related metabolic complications [8,9,10,11]. In particular, rodents fed a high-fat diet develop a significant imbalance in energy storage and expenditure capacity between WAT and brown adipose tissue (BAT) depots [12,13]. Evidence from animal and human studies indicates that limiting the expansion of SAT and VAT with dietary phytochemicals is pharmacologically beneficial to the metabolic health status in obesity [14,15,16]. Lipid metabolism is a complex process involving regulatory elements such as glycerol-3-phosphate dehydrogenase (GPDH), which mediates the rate-determining reaction in the synthesis of triglycerides and serves as a marker of adipogenesis [17].
Numerous studies have indicated that inhibition of GPDH expression or activity can suppress lipid accumulation in 3T3-L1 adipocytes and may serve as an anti-obesity target in adipocytes [17,18,19,20]. Bainiku-ekisu is the fruit juice concentrate of Prunus mume and has been proposed to have pharmacological properties that might benefit the treatment of dyspepsia and diarrhea. An in vitro study showed that bainiku-ekisu exhibited immediate bacteriostatic activity against several strains of H. pylori at a concentration of $0.3\%$ within 15 min [21]. Furthermore, an in vivo pilot study reported that H. pylori-positive patients who received $1\%$ bainiku-ekisu solutions for 2 weeks showed a slight decrease in urea breath test (UBT) values [22]. Yang et al. demonstrated that bainiku-ekisu has a higher total phenolic content (1.9-fold) and flavonoid content (1.4-fold) than fresh Japanese apricot juice, which may have direct effects on improving metabolic disorders such as type 2 diabetes and hyperlipidemia [23]. Black garlic, a fermented product of fresh garlic, is generated by the application of controlled high temperature under high humidity for over 10 days [24,25]. A previous study suggested that fermented black garlic exhibited bioprotective properties against vascular disease through down-regulation of the MAPK pathway in a zebrafish model of vascular lesions [26]. Extracts from black garlic decreased body weight ($7.69\%$) and lumbar subcutaneous fat mass ($16.88\%$) in a high-fat diet-fed rodent model [27]. Treatment with black garlic extract was found to increase cellular oxygen uptake and alter UCP-1-based thermogenesis in human adipose-derived stem cells [28]. Mesona procumbens Hemsl., also called Hsian-tsao, is consumed as a folk medicine and is considered to have therapeutic potential for the treatment of liver disease, heat shock, and metabolic disorders.
The anti-adipogenic activity of Mesona procumbens in 3T3-L1 cells has also been reported: it decreased lipid droplet accumulation by transcriptionally inhibiting the expression of peroxisome proliferator-activated receptor γ (PPARγ) and the transcription factor CCAAT/enhancer-binding protein (C/EBP) β [29]. We have previously shown, in vivo and in vitro, that Mesona procumbens Hemsl. extract could decrease xanthine oxidase activity and prevent the overproduction of serum uric acid, suggesting it as a novel hypouricemic agent [30]. In addition, extracts rich in phenolic compounds from Hsian-tsao have been demonstrated to exhibit antioxidant properties and serve as free radical scavengers [31]. Based on the accumulated experimental evidence, we have developed an innovative bainiku-ekisu-based functional combination of ethanol extracts from Prunus mume and Mesona procumbens Hemsl. and a water extract from black garlic, called the Mei-Gin formula (MGF). However, given the bioavailability of the individual substances in the complex mixture and differences in the pharmacokinetics of the active substances, a deeper understanding of the beneficial effect of the Mei-Gin formula in the prevention or treatment of metabolic parameters in obesity is urgently needed. In the present study, our aim was to investigate the efficacy of the Mei-Gin formula in pre-adipocyte differentiation and in a high-fat diet-fed rat model.

## 2.1. Composition Analysis of the Mei-Gin Formula (MGF)

MGF-1–7 capsules containing a mixed extract of functional powder, consisting of bainiku-ekisu (decompression concentration process), Prunus mume ($70\%$ ethanol extract), black garlic (water extract), and Hsian-tsao ($40\%$ ethanol extract), were provided by Dr. Ming-Ching Cheng, Department of Healthy Food, Chung Chou University of Science and Technology.
In brief, analysis of the phenolic content of the Mei-Gin formulas was carried out using a high-performance liquid chromatography system (L-2130 pump and L-2400 UV detector, Hitachi, Tokyo, Japan), and chromatographic data were recorded on a computer with the LC solution 1.25 sp1 software. Elution was carried out with two buffers: buffer A ($0.1\%$ formic acid in water) and buffer B ($0.1\%$ formic acid in acetonitrile); the flow rate was set at 1 mL/min. The absorption spectra of the samples were detected at a UV wavelength of 280 nm, and all phenolic acids were identified by comparing their retention times with known reference standards [32].

## 2.2. Cell Culture and Treatment

3T3-L1 preadipocytes (BCRC No. 60159) obtained from the Bioresource Collection and Research Center (BCRC, Food Industry Research and Development Institute, Hsinchu, Taiwan) were cultured in high-glucose Dulbecco's modified Eagle medium (high-glucose DMEM) supplemented with $9\%$ newborn calf serum (NBCS), NaHCO3 (1.5 g/L), and $1\%$ penicillin-streptomycin (PSN) in an incubator with an atmosphere of $5\%$ CO2. For the differentiation of adipocytes, 3T3-L1 cells were cultured in differentiation medium containing 0.5 mM 3-isobutyl-1-methylxanthine (IBMX), 1 μM dexamethasone (DEX), 1 μM insulin, 1.5 g/L NaHCO3, and $1\%$ PSN. After 4 days of incubation, the medium was changed to high-glucose DMEM containing $8\%$ fetal bovine serum (FBS), 1 μM insulin, 1.5 g/L NaHCO3, and $1\%$ PSN for 2 days. The mature 3T3-L1 adipocytes were treated with appropriate doses of MGF-1-7 solution (0, 10, 25, 50, 100, and 250 μg/mL) and then incubated for 48 h.

## 2.3. Animals and Diet

Six-week-old male Wistar rats were purchased from BioLASCO Taiwan Co., Ltd. (Taipei, Taiwan) and supplied with a pelletized commercial laboratory diet (Purina Lab Chow) and water ad libitum.
The rats were maintained in an air-conditioned environment (23 ± 2 °C with $60\%$ relative humidity) on a 12 h light/dark cycle at the Experimental Animal Center of Chung Shan Medical University. All experimental manipulations involving animals were strictly implemented according to ethical guidelines for animal experiments and were approved by the Laboratory Animal Center of Chung Shan Medical University (IACUC No. 1664). After 1 week of acclimatization, the rats were randomly divided into nine groups of 12 rats (Figure 1). Rats in the control group were fed an AIN-93G control diet ($7\%$ fat); rats in the experimental groups were fed an AIN-93G-based high-fat diet containing $32\%$ lipids ($7\%$ soybean oil and $25\%$ lard). For the positive control treatment, a health supplement containing hydroxycitric acid (HCA) and chlorogenic acid (CGA) was obtained from Taiyen Biotech Co., Ltd. (Tainan, Taiwan); it has been certified as having anti-body-fat-accumulation effects by the Taiwan Food and Drug Administration (TFDA, License No. A00274). The experimental groups were distributed into eight subgroups: (a) continued to receive the HFD throughout the period (HFD), (b) HFD supplemented with 100 mg/kg body weight Japan Mei-Gin (low-dose Japan Mei-Gin, HFD + JMG-LD), (c) HFD supplemented with 300 mg/kg body weight Japan Mei-Gin (high-dose Japan Mei-Gin, HFD + JMG-HD), (d) HFD supplemented with 100 mg/kg body weight Mei-Gin formula-3 (low-dose Mei-Gin formula-3, HFD + MGF-3-LD), (e) HFD supplemented with 300 mg/kg body weight Mei-Gin formula-3 (high-dose Mei-Gin formula-3, HFD + MGF-3-HD), (f) HFD supplemented with 100 mg/kg body weight Mei-Gin formula-7 (low-dose Mei-Gin formula-7, HFD + MGF-7-LD), (g) HFD supplemented with 300 mg/kg body weight Mei-Gin formula-7 (high-dose Mei-Gin formula-7, HFD + MGF-7-HD), and (h) HFD supplemented with 140.6 mg/kg body weight HCA + CGA powder capsules (HFD + PC).
During the experimental period, body weight and daily feed and water intake were measured and used to calculate the feed efficiency. After 8 weeks, the animals were fasted overnight and euthanized by carbon dioxide, and whole blood was collected from the abdominal aorta. The heart, liver, spleen, lung, kidney, visceral adipose tissue (perirenal fat, epididymal fat, and mesenteric fat), and subcutaneous adipose tissue (retroperitoneal fat and inguinal fat) were dissected, rinsed, weighed, and stored at −80 °C.

## 2.4. Oil Red O Lipid Staining

To measure intracellular lipid accumulation, oil red O staining was performed to determine the effect of MGF-1-7 on lipid synthesis. Briefly, mature 3T3-L1 adipocytes were harvested, washed with PBS, and fixed in $10\%$ neutral buffered formalin for 20 min at room temperature. Subsequently, cells were placed in $100\%$ propylene glycol for 3 min and then stained with oil red O working solution (oil red O solution/water 3:2, v/v) for 60 min. The lipid droplets were visualized and photographed by microscopy (Motic AE30/31), and the oil red O was solubilized in isopropanol and measured spectrophotometrically at 510 nm.

## 2.5. Determination of Glycerol-3-Phosphate Dehydrogenase (GPDH) Activity

GPDH activity was measured with a GPDH activity colorimetric assay kit (Cat No. K640-100, BioVision, Milpitas, CA, USA); mature 3T3-L1 adipocytes were harvested 48 h after MGF-1-7 treatment. Following the manufacturer's instructions, protein concentration was determined spectrophotometrically against the reduced nicotinamide adenine dinucleotide (NADH) standard. GPDH activity (%) was expressed as a percentage change relative to the control ($100\%$).

## 2.6. Statistical Analysis

For all experiments, quantitative data are expressed as mean ± SEM. Data were analyzed by one-way ANOVA followed by Duncan's multiple range test using the SPSS software program, version 22.0 (SPSS Inc., Chicago, IL, USA).
A statistically significant difference was established only if the p-value was < 0.05.

## 3.1. Determination of Phenolic Contents of the Mei-Gin Formulas

The content of the main phenolic compounds in MGF-1-7 was identified using HPLC analysis as follows:

| Formula | Chlorogenic acid (mg/g) | Caffeic acid (mg/g) | p-Coumaric acid (mg/g) |
|---------|-------------------------|---------------------|------------------------|
| MGF-1   | 0.46                    | 0.19                | 0.09                   |
| MGF-2   | 0.26                    | 0.24                | 0.07                   |
| MGF-3   | 0.34                    | 0.18                | 0.08                   |
| MGF-4   | 0.32                    | 0.26                | 0.08                   |
| MGF-5   | 0.54                    | 0.11                | 0.08                   |
| MGF-6   | 0.70                    | 0.19                | 0.07                   |
| MGF-7   | 0.72                    | 0.11                | 0.08                   |

The data indicated that MGF-7 had the highest chlorogenic acid content, MGF-4 had the highest caffeic acid content, and the p-coumaric acid content was similar in each group.

## 3.2. Effects of Mei-Gin Formulas on Lipid Accumulation in 3T3-L1 Adipocytes

To determine the effects of the Mei-Gin formula on intracellular lipid accumulation in adipocytes, the effects of serially diluted MGF-1-7 on 3T3-L1 adipocytes were visualized using oil red O staining. MGF-1-7 effectively reduced lipid accumulation in mature 3T3-L1 adipocytes. To validate this observation, a TG quantification assay was performed to confirm the changes. As shown in Figure 2A,B, consistent with the reduction in lipid accumulation, MGF-1-7 significantly decreased the cellular TG level in 3T3-L1 adipocytes. The 250 μg/mL MGF-3 and -7 treatments showed significant reductions of $31.4\%$ and $35.9\%$ in TG content compared to untreated control cells, respectively (Figure 2B), indicating that in the presence of high-dose MGF-3 and -7, intracellular TG levels were dramatically decreased.
Additionally, a further detailed analysis of the effects of MGF-1-7 on GPDH activity is shown in Figure 2C. An alteration of intracellular GPDH activity was observed in 3T3-L1 adipocytes treated with MGF-1-7. In particular, GPDH activity decreased in the same manner as the intracellular TG level in cells treated with MGF-3 and -7. The results suggested that MGF-3 and -7 had the greatest potency in reducing lipid accumulation in 3T3-L1 adipocytes. Therefore, we used MGF-3 and -7 in subsequent experiments involving animal models.

## 3.3. Effect of the Mei-Gin Formula on Body Weight, Feed Intake, Energy Intake, Feed Efficiency, and Mass of Selected Organs in HFD-Induced Obesity

As shown in Figure 3 and Table 1, body weight during the experimental period progressively decreased in the JMG, MGF, and PC intervention groups compared to the HFD group. In detail, consumption of low-dose (100 mg/kg) Japanese MG, high-dose (300 mg/kg) MGF-3, and both low- and high-dose (100 mg/kg and 300 mg/kg) MGF-7 significantly decreased body weight change and weight gain compared to the HFD group ($p < 0.05$). The feed intake and energy intake of the JMG-fed rats were significantly lower than those of the HFD-fed rats. Although the feed efficiency of rats that consumed MGF-3 (300 mg/kg), MGF-7 (300 mg/kg), and HCA + CGA powder capsules (140.6 mg/kg) was significantly lower than that of the HFD group (Table 2), no significant differences were observed in the weights of the heart, spleen, lungs, and kidneys among the groups. A significant reversal of the increase in liver weight was observed in rats that received low-dose (100 mg/kg) JMG, high-dose (300 mg/kg) MGF-3, low- and high-dose (100 mg/kg and 300 mg/kg) MGF-7, and powder capsules (140.6 mg/kg) (Table 3).

## 3.4. Effect of the Mei-Gin Formula on Body Fat Mass and Adipose Tissue in Rats with HFD-Induced Obesity

As shown in Table 4 and Table 5, rats fed a high-fat diet exhibited persistently higher total body fat mass compared to the counterpart ND group, and this was significantly attenuated by consuming low-dose (100 mg/kg) JMG, high-dose (300 mg/kg) MGF-3, both low- and high-dose (100 mg/kg and 300 mg/kg) MGF-7, and HCA + CGA powder capsules (140.6 mg/kg). Similarly, a trend toward reduction in subcutaneous adipose tissue was also observed in those five groups, reflecting decreases in retroperitoneal and inguinal adipose tissue. Although the effect of low-dose JMG on visceral adipose tissue was not statistically different, it still showed potential to lower the mass of adipose tissue at week 8. Among the visceral (perirenal, epididymal, and mesenteric) adipose tissues, a significant reduction was found in perirenal and mesenteric adipose tissue after administration of low-dose JMG, high-dose MGF-3, both high- and low-dose MGF-7, and powder capsules, while the weight of epididymal adipose tissue did not differ significantly between groups after the dietary intervention.

## 4. Discussion

Obesity is characterized by excess body fat, in terms of both the amount and the distribution of fat. Recently, emerging evidence has focused on dietary phenolic compounds as a therapeutic strategy for people with obesity, as naturally occurring plant products reduce the potential for side effects [33,34,35,36]. Given the risk of adverse medication reactions, research on obesity management and treatment has increasingly examined the biological properties of natural phenolic compounds that exhibit preventive or therapeutic potential to improve lipid and glucose dysregulation [37,38,39]. Existing anti-obesity medications, including orlistat and sibutramine, have modest efficacy and can cause clinically significant adverse drug reactions.
Therefore, there is a strong need to discover naturally occurring foods and substances as safe and acceptable alternatives to designer drugs. Based on Chinese herbology, we developed a novel Mei-Gin formula intended to exert a powerful synergistic effect against the development of obesity [40,41]. In this study, our aim was to determine the anti-obesity effect of the Mei-Gin formula in 3T3-L1 cells in vitro and in an HFD-induced rat model in vivo by monitoring the regulation of cellular lipid accumulation and adipose tissue. In vitro cell models, particularly the differentiation and adipogenesis of 3T3-L1 preadipocytes, whose main characteristic is intracellular triglyceride accumulation, are associated with the development of obesity [42,43]. In this regard, we demonstrated that the inhibitory effects of the Mei-Gin formula on 3T3-L1 adipocyte differentiation were due to the downregulation of GPDH activity, which subsequently led to a reduction in cellular triglyceride production. As shown in Figure 2, all of the serially tested Mei-Gin formulas (MGF-1-7) were capable of altering the differentiation of 3T3-L1 preadipocytes into 3T3-L1 adipocytes. On the basis of the HPLC analysis, we determined that the phenolic compounds most abundant in the Mei-Gin formula were p-coumaric acid, caffeic acid, and chlorogenic acid. Among food phenolic compounds, hydroxycinnamic acids represent a major class of phenolic acids available in fruits, seeds, and vegetables [44,45]. Dietary intake of hydroxycinnamic acids can easily reach levels of 0.5–1 g or even higher in humans. Previous evidence indicated that hydroxycinnamic acids, including p-coumaric acid, caffeic acid, ferulic acid, and chlorogenic acid, can serve as primary antioxidants [46], have powerful anti-inflammatory [47] and anti-cancer activity [48,49], and are involved in improving insulin resistance [50].
One study revealed that caffeic acid phenethyl ester effectively prevents body weight gain and the gain of epididymal adipose tissue [51]. The oil red O staining results indicated that caffeic acid phenethyl ester significantly reduced adipogenesis in 3T3-L1 preadipocytes, which is in accordance with our results. Dietary consumption of chlorogenic acid markedly altered the plasma lipid profile and attenuated fatty liver by increasing hepatic PPAR-α expression in hypercholesterolemic rats [52]. Furthermore, p-coumaric acid-induced activation of the AMPK pathway subsequently leads to the inhibition of adipogenesis in 3T3-L1 adipocytes [53]. To understand the synergistic effects of the natural plants in our new formula, different ratios of each individual plant were used to form MGF-1-7 in this study. Interestingly, quantitative results for intracellular triglycerides in 3T3-L1 adipocytes indicated that MGF-3, -4, and -7 exhibited predominant inhibitory effects on lipid accumulation, while the greatest inhibitory capacity among the Mei-Gin formulas on GPDH activity was found in MGF-3, -5, and -7. Taken together, MGF-3 and -7 appeared to be the most potent in regulating the development of obesity. Therefore, we examined the effectiveness of MGF-3 and -7 in ameliorating weight gain and further identified the contribution of adipose tissues to obesity in the HFD-induced rat model. Rats fed a high-fat/high-energy diet have previously been shown to have significantly increased body weight compared to normal diet-fed rats and are widely used in diet-induced obesity studies [54,55]. In the rat model induced by a high-fat diet, excessive lipid accumulation in subcutaneous and visceral adipose tissue is a key feature of obesity. In the present study, we observed that administration of the Mei-Gin formula to obese rats for 8 weeks significantly reduced final body weight and weight gain.
Furthermore, both low- and high-dose MGF-7 suppressed body weight more effectively than MGF-3. Importantly, diet-induced obesity is associated with lipid burden in adipose tissue as well as in non-adipose tissue. Increased fat deposition is observed in the liver and eventually leads to weight gain in obese animal models [36,56,57,58]. Our data reveal a similar trend of reduced liver weight in HFD-induced rats after administration of the Mei-Gin formula. Interestingly, a similar result was shown in 3T3-L1 adipocytes in vitro and obese mice in vivo with a plant resin supplement [59]. Recently, a complete analysis of Prunus mume extracts identified the phytochemical composition, including chlorogenic acid, lupeol, mumefural, and ursolic acid, which are proposed to have anti-cancer properties [60,61,62,63]. A pilot study conducted on 18 H. pylori-positive participants proposed an anti-bacterial activity of bainiku-ekisu therapy in the stomach [22]. Bainiku-ekisu is a Prunus mume juice concentrate and was previously reported to exhibit strong anti-bacterial activity in vitro [23]. Due to the concentration process, phytochemical contents such as phenolic acids and flavonoids are higher in bainiku-ekisu than in Prunus mume juice [64,65]. Herein, we are the first to verify the effects of the Mei-Gin-based plant mixture in regulating body weight gain in obese rats. High-performance liquid chromatography analysis confirmed that the phenolic acid and flavonoid constituents increase markedly through the thermal process, being higher in black garlic than in fresh garlic [66,67,68]. HPLC isolation and identification of black garlic showed variable quantities of phenolic acids, including gallic acid, vanillic acid, chlorogenic acid, caffeic acid, p-coumaric acid, and ferulic acid. Hung et al. have investigated the antioxidant activity and active components of Hsian-tsao [69].
Several phenolic compounds were identified in the water extract of Hsian-tsao, including apigenin, caffeic acid, vanillic acid, and kaempferol [70]. The authors also determined that the amounts of caffeic acid and kaempferol were the highest among these compounds; they are considered the main functional components and may contribute to the antioxidant properties of Hsian-tsao. Dietary supplementation with phenolic compounds from natural plants as potent anti-obesity agents has been well documented. However, the effects of natural plants on obesity are not yet fully defined. The composition of the innovative plant formula was based on the hypothesis that combining the candidate plants referenced above can modulate the development of obesity in vivo and in vitro. Our data demonstrated that low-dose JMG, high-dose MGF-3, both high- and low-dose MGF-7, and HCA + CGA powder capsules effectively reduced total body fat in HFD-induced rats, which is consistent with the initial observations of body weight gain and liver weight. In addition, we identified the contribution of adipose tissue in obese rats: Mei-Gin administration led to significant reductions in the visceral (perirenal and mesenteric) and subcutaneous (retroperitoneal and inguinal) adipose compartments in HFD-induced rats compared to HFD controls. These data demonstrated that administration of the Mei-Gin formula caused weight loss affecting both visceral and subcutaneous adipose tissue. However, no significant differences were established between low-dose JMG and HFD controls. JMG is a condensed extract obtained using traditional thermal condensation methods. Accumulated studies have shown that JMG inhibits the proliferation of hepatocellular carcinoma cells [71] and improves hyperglycemia [72], so it can serve as a dietary intervention for adjuvant therapy.
However, phytochemical compounds such as phenolic acids have been reported to be destroyed by heat treatment at high temperatures for long periods of time [64,65]. In the present study, administration of high-dose MGF-3 and of high- and low-dose MGF-7 to HFD-induced rats resulted in a more efficient reduction in body weight gain, liver weight, and total body fat compared to the JMG groups, which may be attributed to our decompression-processed Mei-Gin retaining more bioactive components. The optimal dose of each MG formula group was determined in the corresponding HFD-induced rat model. The results suggested that high-dose MGF-7 exerts the most potent anti-obesity activity.

## 5. Conclusions

In conclusion, this is the first study to verify the effects of a Mei-Gin-based plant formula on obesity both in vivo and in vitro. MGF-1-7 reduced 3T3-L1 adipocyte differentiation and lipid accumulation by suppressing GPDH activity. Furthermore, the anti-obesity effects reduced body fat accumulation in adipose tissue, which led to a reduction in body weight gain. Therefore, the Mei-Gin formula is a promising candidate for the treatment of obesity; however, further investigation of the mechanistic pathways of lipolysis, fatty acid oxidation and synthesis, and thermogenesis in adipose tissues may provide deeper insight into the molecular mechanisms underlying obesity and help develop emerging therapeutic options.
# CD24 Gene Expression as a Risk Factor for Non-Alcoholic Fatty Liver Disease

## Abstract

In light of the increasing NAFLD prevalence, early detection and diagnosis are needed for decision-making in clinical practice and could be helpful in the management of patients with NAFLD. The goal of this study was to evaluate the diagnostic accuracy of CD24 gene expression as a non-invasive tool to detect hepatic steatosis for the diagnosis of NAFLD at an early stage. These findings will aid in the creation of a viable diagnostic approach. Methods: This study enrolled eighty individuals divided into two groups: a study group of forty cases with bright liver and a group of healthy subjects with normal liver. Steatosis was quantified by CAP. Fibrosis assessment was performed by FIB-4, NFS, FAST score, and Fibroscan. Liver enzymes, lipid profile, and CBC were evaluated. Using RNA extracted from whole blood, CD24 gene expression was measured by real-time PCR. Results: CD24 expression was significantly higher in patients with NAFLD than in healthy controls. The median fold change was 6.56 in NAFLD cases compared to control subjects. Additionally, CD24 expression was higher in cases with fibrosis stage F1 than in those with fibrosis stage F0 (mean expression level 8.65 in F1 vs. 7.19 in F0), but without a significant difference ($p = 0.588$). ROC curve analysis showed that CD24 ∆CT had significant diagnostic accuracy in the diagnosis of NAFLD ($p = 0.034$). The optimum cutoff for CD24 was 1.83 for distinguishing patients with NAFLD from healthy controls, with a sensitivity of $55\%$ and a specificity of $74.4\%$; an area under the ROC curve (AUROC) of 0.638 ($95\%$ CI: 0.514–0.763) was determined. Conclusion: In the present study, CD24 gene expression was up-regulated in fatty liver.
Further studies are required to confirm its diagnostic and prognostic value in the detection of NAFLD, clarify its role in the progression of hepatocyte steatosis, and elucidate the mechanism of this biomarker in disease progression.

## 1. Introduction

NAFLD is a clinico-pathologic syndrome that encompasses various medical entities, including simple fatty liver (simple steatosis), nonalcoholic steatohepatitis (NASH), cirrhosis, and its complications [1]. NAFLD now affects up to $25\%$ of people around the world. The highest prevalence rate is in the Middle East ($32\%$), followed by South America ($30\%$), while the lowest is in Africa ($13\%$). It also accounts for $2\%$ of total deaths [2]. The increase in NAFLD prevalence parallels the rise in obesity and is tightly associated with metabolic comorbidities (diabetes, hypertension, insulin resistance, and dyslipidemia). It also places patients at higher risk for progressive liver disease [3]. It has become clear that, as with other complex multisystem disorders, NAFLD is triggered by a variety of underlying mechanisms, the most important of which are alterations in hepatic and extra-hepatic lipid metabolism [4]. The study of genetic factors in NAFLD is a rapidly growing field, as they determine not only the response of different individuals to excess caloric consumption, but also the resulting metabolic derangements [5]. Cluster of differentiation 24 (CD24) is a glycophosphatidylinositol (GPI)-anchored mucin-like cell surface glycoprotein encoded by a gene located on chromosome 6. It is expressed on mature granulocytes and B cells and regulates growth and differentiation signals to these cells. Accumulating evidence has shown that abnormal over-expression of this protein is a prognostic factor in many types of cancer, resulting in cancer cell growth, proliferation, and metastasis [6].
The expression of the cell surface molecule CD24 has previously been shown to identify a subset of adipocyte progenitor cells that is crucial for the reconstitution of white adipose tissue (WAT) function in vivo, as well as a particular regulator of adipogenesis in vitro [7]. Recently, CD24 has been identified as a possible biomarker for distinguishing NAFLD/NASH, and it was concluded that the mRNA expression of CD24 is upregulated in fatty liver [8]. Additionally, Feng et al. [2021] found that CD24 was positively associated with NAFLD severity and could also differentiate mild NAFLD patients from severe NAFLD patients [9]. Therefore, the present study aimed to identify the association between gene expression of CD24 and early-stage NAFLD.

## 2. Subjects and Methods

The present study is a prospective study that was carried out on 80 subjects who attended the outpatient clinics of the Internal Medicine Department of Kasr Al Ainy Hospital, Cairo, Egypt during the period from May 2019 to December 2020, either for general health checks or to identify and treat the complications of other metabolic disorders such as diabetes or obesity. The selected subjects were divided into two groups according to the sonographic findings of steatosis: 40 NAFLD patients with bright liver echogenicity and 40 healthy subjects with normal liver echogenicity. All subjects were aged between 19 and 56 years. Those with clinical, biochemical, or histological evidence of cirrhosis; those with known causes of liver disease (viral hepatitis B and C, autoimmune hepatitis, primary biliary cirrhosis, haemochromatosis, or Wilson disease); those with a history of current or past excessive alcohol drinking, defined as an average daily consumption of more than 20 g of alcohol; those with drug-induced liver disease; pregnant women; and patients on hormonal contraceptive drugs (oral or parenteral) or hormone replacement therapy were excluded from the study.
The study was approved by Medical Research Ethical Committee of the National Research Center, Cairo, Egypt (Approval No.19-001), and informed consent was obtained from all patients. All patients were evaluated by history and clinical examination and measurement of anthropometric parameters, such as weight (kg), height (m), body mass index (BMI; kg/m2), waist circumference (cm), and mid-arm circumference (cm). Body mass index (BMI) was determined by dividing weight by square height (kg/m2). BMI is calculated as weight in kilograms divided by the height in metres squared. According to WHO, People with BMI = 18.5–24.9 have normal weight, people with BMI = 25.0–29.9 were classified overweight, while people with BMI ≥ 30 kg/m2 defines obese. BMI is calculated as weight in kilograms divided by the height in metres squared. According to WHO, in adults, overweight is defined as a BMI of 25–29.9, while a BMI ≥ 30 kg/m2 defines obesity. Waist circumference (WC) was obtained from each subject by measuring at the midpoint between the lower rib margin and the iliac crest using a conventional tape graduated in centimeters (cm). Mid-arm circumference was measured as the right upper arm measured at the midpoint between the tip of the shoulder and the tip of the elbow (olecranon process and the acromium). Cases were divided according to their previous diagnosis or levels of fasting blood sugar: a fasting blood sugar level less than 115 mg/dL is considered normal or prediabetes. While, if the fasting blood sugar level is 126 mg/dL or higher, the patient was diagnosed diabetic. Complete blood count was determined using the automated hematology analyzer SF-300 (Sysmex Corporation, Japan). Additionally, liver enzymes (ALT, AST, ALP, GGT), serum albumin, prothrombin time, INR, serum creatinine, lipid profile, and fasting blood sugar were measured to all individuals according to the manufacture instructions. The reagents were purchased from Spectrum Company, Cairo, Egypt. 
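The BMI calculation and WHO cut-offs described above can be sketched as follows (the function names are ours, and the underweight class below 18.5 is added for completeness):

```python
def bmi(weight_kg, height_m):
    """BMI = weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def who_category(bmi_value):
    """WHO adult BMI categories as used in the study."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25.0:
        return "normal weight"
    if bmi_value < 30.0:
        return "overweight"
    return "obese"

# Example: the NAFLD group's mean BMI of 31.8 kg/m2 falls in the obese class,
# while the control group's mean of 23.76 kg/m2 is normal weight.
print(who_category(31.8))   # obese
print(who_category(23.76))  # normal weight
```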
NAFLD fibrosis score (NFS), FIB-4, and FAST score were calculated as described previously by Angulo et al. [2007] and Calès et al. [2009] [10,11] to assess fibrosis in the NAFLD patient group.

NFS = −1.675 + 0.037 × age [y] + 0.094 × BMI [kg/m2] + 1.13 × IFG/diabetes [yes = 1, no = 0] + 0.99 × AST/ALT ratio − 0.013 × platelet count [×10⁹/L] − 0.66 × albumin [g/dL]

FIB-4 = (age [y] × AST [U/L]) / (platelet count [×10⁹/L] × √ALT [U/L])

The FAST score was calculated according to Newsome et al. [2020] [12] as:

FAST = exp(−1.65 + 1.07 × ln(LSM) + 2.66 × 10⁻⁸ × CAP³ − 63.3 × AST⁻¹) / [1 + exp(−1.65 + 1.07 × ln(LSM) + 2.66 × 10⁻⁸ × CAP³ − 63.3 × AST⁻¹)]

Abdominal ultrasonography was performed on all individuals using the 3.5 MHz probe of a General Electric Logic 6 machine.

## 2.1. Liver Stiffness Measurement (LSM) and Controlled Attenuation Parameter (CAP)

Fibroscan (M probe, Echosens, Paris) was carried out by an experienced examiner in all patients (after at least 6 h of fasting) in the left lateral position, and the median liver stiffness of the 10 successful measurements fulfilling the criteria (success rate greater than $60\%$ and interquartile range/median ratio < $30\%$) was noted (in kPa). The final CAP value, which ranges from 100 to 400 dB/m, is the median of the individual measurements. As an indicator of variability, the ratio of the IQR of the CAP values to the median (IQR/MCAP) was calculated. The operator was blinded to the patients' clinical data. According to the manufacturer's instructions, in addition to previous studies, the stages of fibrosis (F0: 1–6, F1: 6.1–7, F2: 7–9, F3: 9.1–10.3, and F4: ≥10.4) were defined in kPa [13]. Moreover, steatosis stages (S0: <215, S1: 216–252, S2: 253–296, S3: >296) were defined in dB/m [13].

## 2.2. Sample Collection

10 mL of venous blood was drawn from all study participants in the morning after a 12 h fast; a portion of the blood was collected in an EDTA tube for the extraction of RNA and for the determination of routine blood pictures (CBC) using the automated hematology analyzer SF-300 (Sysmex Corporation, Japan). The other portion was left to clot at room temperature. Serum was separated by centrifugation for 10 min at 3000 rpm. Sera were used immediately for the other biochemical investigations, including aspartate aminotransferase (AST), alanine aminotransferase (ALT), bilirubin, serum albumin, fasting blood glucose, cholesterol, triglycerides, HDL-C, and LDL-C, according to the manufacturer's instructions. The reagents were purchased from Spectrum Company, Cairo, Egypt.

## 2.3. CD24 Gene Expression by Quantitative Real-Time PCR (qRT-PCR)

Total RNA was isolated from whole blood using the GeneJET Whole Blood RNA Purification Mini Kit (Thermo Scientific, Lithuania) following the manufacturer's suggestions.

## 2.4. Reverse Transcription for cDNA Synthesis and Quantitative Real-Time PCR (RT-qPCR)

Reverse transcription (RT) was performed to obtain cDNA from 400 ng of purified RNA using the High-Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Lithuania) with random hexamers according to the manufacturer's suggestions. 10 µL of the 2X RT master mix was pipetted into each tube, then 10 µL of RNA sample was added and mixed well. The tubes were centrifuged to spin down the contents and eliminate any air bubbles. The tubes were then placed in the PCR machine (Cleaver Scientific, UK) programmed as follows: 25 °C for 10 min, 37 °C for 120 min, and 85 °C for 5 min. After determination of cDNA concentration and purity, samples were stored at −20 °C until quantitative real-time PCR (qRT-PCR) was carried out.
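The NFS, FIB-4, and FAST fibrosis scores described in the methods can be transcribed into Python as a minimal sketch (variable names are ours; FIB-4 is written with the square root of ALT, as in its standard published definition):

```python
import math

def nfs(age, bmi, ifg_diabetes, ast, alt, platelets, albumin):
    """NAFLD fibrosis score (Angulo et al.); platelets in 1e9/L, albumin in g/dL.
    ifg_diabetes is 1 if impaired fasting glucose / diabetes is present, else 0."""
    return (-1.675 + 0.037 * age + 0.094 * bmi + 1.13 * ifg_diabetes
            + 0.99 * (ast / alt) - 0.013 * platelets - 0.66 * albumin)

def fib4(age, ast, alt, platelets):
    """FIB-4 = (age x AST) / (platelets x sqrt(ALT))."""
    return (age * ast) / (platelets * math.sqrt(alt))

def fast(lsm_kpa, cap_dbm, ast):
    """FAST score (Newsome et al.): logistic of the published linear predictor."""
    x = -1.65 + 1.07 * math.log(lsm_kpa) + 2.66e-8 * cap_dbm ** 3 - 63.3 / ast
    return math.exp(x) / (1 + math.exp(x))
```

For example, a hypothetical patient aged 50 with AST 40 U/L, ALT 35 U/L, and platelets 200 × 10⁹/L gives `fib4(50, 40, 35, 200)` of about 1.69; the logistic form guarantees FAST always lies between 0 and 1.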
CD24 gene expression in the enrolled samples was quantified using PowerUp SYBR Green master mix (2×) (ThermoFisher Scientific, Lithuania). The primer sequences used in the quantitative real-time PCR analyses were as follows:

| Primer | Sequence |
| --- | --- |
| CD24 forward primer | 5′-ACC CAC GCA GAT TTA TTC CA-3′ |
| CD24 reverse primer | 5′-ACC ACG AAG AGA CTG GCT GT-3′ |
| β-actin forward primer | 5′-TGA GCG CGG CTA CAG CTT-3′ |
| β-actin reverse primer | 5′-TCC TTA ATG TCA CGC ACG ATT T-3′ |

PCR amplification was carried out in a 20 μL reaction volume containing 1 µL cDNA, 10 µL PowerUp SYBR Green master mix, 7 μL nuclease-free water, and 1 µL each of the gene-specific forward and reverse primers listed above. The reaction was run in the Rotor-Gene Q instrument (QIAGEN). Fluorescence measurements were made in every cycle, and the following thermal profile was used: UDG activation at 50 °C with a 2-min hold and dual-lock DNA polymerase activation at 94 °C with a 3-min hold, followed by 45 cycles of denaturation at 94 °C for 30 s, annealing at 55 °C for 30 s, and extension at 72 °C for 30 s. The expression level of CD24 in each tested sample was expressed as a ΔΔCt (cycle threshold) value, calculated from threshold cycle (Ct) values corrected by β-actin expression, with the following equations: relative amount of CD24 = 2^(−ΔΔCt); ΔΔCt = ΔCt of cases − ΔCt of controls; ΔCt = Ct (CD24) − Ct (β-actin).

## 2.5. Statistical Analysis

SPSS version 16.0 (SPSS Inc., Chicago, IL, USA) was used for statistical analysis, with a two-sided significance criterion of p < 0.05. Clinical data were expressed as mean ± SD (continuous, normally distributed variables). Categorical data were summarized as percentages. The significance of differences between groups was determined using a two-tailed Student's t-test. Additionally, qualitative variables were assessed by the chi-squared (χ²) test.
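The 2^(−ΔΔCt) (Livak) calculation described above can be sketched in a few lines; the function name and the example Ct values are ours, for illustration only:

```python
def fold_change(ct_cd24_case, ct_actin_case, ct_cd24_ctrl, ct_actin_ctrl):
    """Relative CD24 expression by the 2^-ddCt method.

    dCt = Ct(CD24) - Ct(beta-actin); ddCt = dCt(case) - dCt(control).
    """
    delta_case = ct_cd24_case - ct_actin_case   # dCt of the case
    delta_ctrl = ct_cd24_ctrl - ct_actin_ctrl   # dCt of the control
    return 2 ** -(delta_case - delta_ctrl)      # relative amount = 2^-ddCt

# A case whose dCt is 2 cycles lower than the control's (hypothetical
# values) corresponds to a 4-fold up-regulation.
print(fold_change(24.0, 20.0, 26.0, 20.0))  # → 4.0
```

Note that a lower Ct means more template, which is why the exponent carries a negative sign.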
Correlations between different parameters were assessed using Pearson's and Spearman's correlation coefficients. A receiver operating characteristic (ROC) curve was plotted to assess the diagnostic power of CD24 in NAFLD patients and controls, with an area under the curve (AUC) greater than 0.5 considered statistically significant. Probability (p) values ≤0.05 were considered statistically significant, while p > 0.05 was considered not statistically significant (NS).

## 3. Results

The present study is a case-control study that recruited 80 adult subjects (28 males and 52 females). Their ages ranged from 19 to 56 years. The demographic, anthropometric, clinical, and biochemical characteristics of both groups (NAFLD and controls) are summarized in Table 1. Patients with NAFLD were significantly older than controls (mean age 42.18 ± 11.14 y vs. 29.65 ± 6.63 y, p < 0.0001). There were more males in the control group (45%), but the majority in the NAFLD group were females (75%). NAFLD patients exhibited a higher mean BMI (31.8 ± 2.9 kg/m²) than the control group (23.76 ± 1.4 kg/m²) (p < 0.001). Patients with NAFLD had a higher prevalence of hypertension and diabetes mellitus in comparison to healthy controls (p < 0.001) (Table 1). Among the studied NAFLD patients, 22.5% had a family history of diabetes, 30% had a family history of liver disease, and 62.5% of NAFLD cases (n = 25) had an enlarged liver on ultrasound. Mean serum fasting blood glucose was significantly higher in NAFLD patients than in controls (122.6 ± 40.97 vs. 96.03 ± 7.77; p < 0.001). In addition, hemoglobin levels were lower in NAFLD cases (11.56 ± 1.4 g/dL) than in healthy controls (12.81 ± 1.06 g/dL) (p < 0.001). No significant difference was observed in total leucocytic count (TLC) or platelet count between the NAFLD and control groups (p > 0.05).
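The AUROC used in the analysis above is, equivalently, the probability that a randomly chosen case has a higher marker value than a randomly chosen control (the Mann–Whitney interpretation, with ties counted as half). A minimal, library-free sketch with made-up scores (not study data):

```python
def auroc(case_scores, control_scores):
    """Rank-based AUROC: fraction of (case, control) pairs in which the
    case scores higher; tied pairs contribute 0.5."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Hypothetical marker values, higher on average in cases than controls:
# 5.5 of the 6 (case, control) pairs favour the case.
print(auroc([2.1, 3.0, 4.2], [1.0, 2.1]))  # ≈ 0.917
```

An AUROC of 0.5 corresponds to no discrimination, which is why values above 0.5 are the ones of diagnostic interest.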
NAFLD patients had significantly higher serum levels of aspartate aminotransferase (AST), alanine aminotransferase (ALT), alkaline phosphatase (ALP), and gamma-glutamyl transferase (GGT) compared to healthy controls (p < 0.001). On the other hand, the mean albumin level was almost normal (3.8 ± 0.38 g/dL) in the NAFLD group. There was a significant elevation in total cholesterol, triglycerides, and LDL-cholesterol among NAFLD patients compared to controls, while there was a significant decrease in HDL in the NAFLD group as opposed to controls (p < 0.05). Table 2 shows the clinical and biochemical characteristics of participants stratified by sex and presence/absence of NAFLD. In both sexes, participants with NAFLD were older and had a higher BMI, as well as a higher prevalence of diabetes. Hemoglobin levels were significantly lower in female cases compared to male cases in the NAFLD group (p = 0.001). However, ALT and AST levels were significantly higher in male NAFLD cases compared to female NAFLD cases (p = 0.009 and p = 0.038, respectively) (Table 2). The mean Fibroscan value in all NAFLD patients was 5.1 ± 0.99 kPa, indicating that all patients had mild fibrosis with a stage less than 2. Thirty patients had stage 0 fibrosis, while the rest had stage 1 fibrosis. Mean Fibroscan values for cases with fibrosis stages 0 and 1 were 4.7 ± 0.67 and 6.5 ± 0.3 kPa, respectively. There was a statistically significant difference in liver stiffness measurements between patients with stage 0 fibrosis and those with stage 1 fibrosis (p < 0.001). In addition, there was a stepwise increase in CAP score parallel to the increase in severity of liver fibrosis (p < 0.001) (Table 3). This study showed that both the NFS and FIB-4 scores were similar in patients with stage 0 fibrosis and those with stage 1 fibrosis (p > 0.05). This may be because all cases included in our study have mild fibrosis.
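For reference, the three non-invasive scores compared above follow directly from the published formulas (Angulo et al. for NFS, the standard FIB-4 index, and Newsome et al. for FAST); this is a minimal sketch, and the function names and the illustrative values below are ours, not patient data:

```python
import math

def nfs(age, bmi, ifg_or_diabetes, ast, alt, platelets, albumin):
    """NAFLD fibrosis score (Angulo et al., 2007).
    age in years, BMI in kg/m^2, platelets in x10^9/L, albumin in g/dL."""
    return (-1.675 + 0.037 * age + 0.094 * bmi
            + 1.13 * (1 if ifg_or_diabetes else 0)
            + 0.99 * (ast / alt)
            - 0.013 * platelets - 0.66 * albumin)

def fib4(age, ast, alt, platelets):
    """FIB-4 index: age x AST / (platelets x sqrt(ALT))."""
    return (age * ast) / (platelets * math.sqrt(alt))

def fast(lsm, cap, ast):
    """FAST score (Newsome et al., 2020); LSM in kPa, CAP in dB/m."""
    z = -1.65 + 1.07 * math.log(lsm) + 2.66e-8 * cap ** 3 - 63.3 / ast
    return math.exp(z) / (1 + math.exp(z))  # logistic form, so 0 < FAST < 1

# Illustrative inputs only:
print(round(fib4(50, 40, 40, 200), 2))  # → 1.58
```

Because FAST is a logistic transform, it is bounded between 0 and 1 regardless of the inputs, which is convenient for its published rule-in/rule-out cutoffs.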
Additionally, the performance of FIB-4 and NFS for ruling in advanced fibrosis is rather inadequate, meaning that further assessment with another test is needed in case of positive results. According to the RT-PCR results, expression of CD24 was significantly higher in patients with NAFLD than in healthy controls. The median fold change in the expression of CD24 was 6.56-fold higher in NAFLD cases compared to control subjects (Figure 1). The present study showed higher expression of CD24 in female cases with NAFLD compared to male cases (fold change of 6.9 in females vs. 4.4 in males), but without a significant difference (p = 0.262) (Figure 2). Additionally, CD24 expression was higher in cases with fibrosis stage F1 compared to those with fibrosis stage F0, as the mean expression level of CD24 was 7.19 in F0 cases as compared to 8.65 in F1 patients, but without a significant difference (p = 0.588). Furthermore, there was no difference in CD24 fold change between overweight patients (median fold change = 9) and obese cases (median fold change = 5.89) (p = 0.447) (Figure 3). Additionally, the median fold change in CD24 in diabetic cases was 7.0 compared to 5.13 in non-diabetic cases (p = 0.609) (Figure 4).

## 3.1. Evaluation of the Diagnostic Accuracy of CD24 Gene Expression for Distinguishing Patients with NAFLD from Healthy Controls

Figure 5 illustrates the ROC plots used to assess the diagnostic accuracy of CD24 ΔCt in distinguishing patients with NAFLD from healthy controls. ROC curve analysis showed that CD24 ΔCt had significant diagnostic accuracy in the diagnosis of NAFLD (p = 0.034). The ROC curve showed that the optimum cutoff for CD24 was 1.83 for distinguishing patients with NAFLD from healthy controls, with a sensitivity of 55%, a specificity of 74.4%, and an area under the ROC curve (AUROC) of 0.638 (95% CI: 0.514–0.763).

## 3.2.
Correlation between Different Non-Invasive Fibrosis Markers and CD24 Gene Expression

Table 4 shows the correlation of kPa, CAP, FAST, NFS, and FIB-4 with CD24 gene expression. Pearson's correlation test showed a significant positive correlation between CD24 and NFS (r = 0.356, p = 0.001). By binary logistic regression analysis, none of the examined parameters was found to be a significant determinant of NAFLD after adjusting for the effects of the potential confounders age, gender, diabetes, and BMI, respectively (Table 5).

## 4. Discussion

NAFLD is known nowadays as the most common liver disorder of the 21st century. It is diagnosed by the presence of more than 5% fat accumulation in liver cells without excess alcohol consumption or secondary causes of fat accumulation in the background. Approximately 25% of the world's adult population has NAFLD, and the prevalence is still increasing [13]. NAFLD may eventually deteriorate to HCC as a result of excessive lipid accumulation, liver cell damage, and immune system dysfunction, which lead to scarring and permanent liver damage [14]. In light of the increasing NAFLD prevalence, early detection and diagnosis are needed for decision-making in clinical practice and could be helpful in the management of patients with NAFLD. The present study showed a significant trend of older age with the progression of non-alcoholic fatty liver disease. This finding substantiates previous findings in the literature, which suggested that the prevalence of NAFLD increases with increasing age [15]. Regarding gender distribution, the present study showed that there were more males in the control group (45%) compared to the NAFLD group (25%), but the majority in the NAFLD group were females (75%).
These results revealed that there was no statistically significant difference between the two studied groups according to gender (p = 0.061). The explanation for the gender difference is the different distribution of fat mass by gender, e.g., more abdominal visceral adipose tissue in males and more subcutaneous adipose tissue mass in females. Additionally, previous results showed that Hispanic women have a higher risk for NAFLD compared to men, whereas, in the non-Hispanic population, NAFLD is more frequent in males [16]. Additionally, Lonardo et al. mentioned that gender is one of the main causes of variation in NAFLD risk factors. They also detected that NAFLD is more common and more severe in men than in women. However, it is more common in women after menopause, indicating that estrogen may be beneficial [17]. In the current study, the incidence of NAFLD increased in concert with the presence of multiple metabolic disorders, such as dyslipidemia, diabetes, hypertension, and visceral obesity. As expected, the incidence of diabetes and hypertension was significantly higher in patients suffering from NAFLD. This is in good agreement with previous studies that mentioned impaired glucose tolerance as an independent risk factor for the progression of NAFLD [18,19]. According to the International Diabetes Federation (IDF), the prevalence of DM among Egyptian adults is 15.2%, which may be an underestimation [20]. Lonardo et al. reported that patients with T2DM had 80% higher liver fat content compared to non-diabetic patients [21]. Additionally, Lee et al. (2019) mentioned that, compared to the general population (around 25%), 50% to 70% of people with diabetes have NAFLD, and NAFLD severity (including fibrosis) tends to be worsened by the presence of diabetes [22]. Additionally, another study carried out on Egyptian college students showed that around 1 in 3 had steatosis, and 1 in 20 had fibrosis.
The prevalence of NAFLD in young adults was estimated to be 31.6%, which is closely in line with the 31.8% prevalence rate found in a meta-analysis of numerous epidemiological studies across general Middle Eastern populations. It is known that the Middle East and North Africa region has one of the highest prevalence rates of NAFLD globally, and that Egypt ranks among the 10 nations with the highest obesity prevalence. Combining the two may explain our unexpected observation. In our cohort, 59 (49.2%) of participants were overweight or obese [23]. NAFLD is caused by a variety of different molecular pathways and cellular alterations. The molecular pathways of NAFLD pathogenesis in the liver have been identified in several studies. However, the major genes linked to illness development and the underlying functional pathways are as yet unknown, and whether the differentially expressed CD24 is involved in hepatic lipid metabolism is still unclear. Microarray technologies have revealed a large number of new molecular markers (DNA, RNA, and protein) in recent years. Further research is needed to confirm the clinical utility of these impending novel indicators in relation to hepatic steatosis. CD24 is one of these markers, recently reported by Huang et al. as a possible biomarker in the course of hepatocyte steatosis [8]. Various studies have recently discovered that CD24 expression is relatively high in many human malignancies, including HCC [24,25,26,27,28]. Additionally, CD24 overexpression has been correlated with increased invasiveness, proliferation, and metastasis [29]. It was previously identified that a subpopulation of adipocyte progenitor cells expressing the cell surface molecule CD24 is necessary for reconstitution of white adipose tissue function in vivo, as well as being a key regulator of adipogenesis in vitro [30]. In our study, we investigated the association between CD24 gene expression and the prevalence of NAFLD.
The current study found that CD24 gene expression was considerably greater in NAFLD cases compared to controls, and the normalized CD24 gene expression in NAFLD was up-regulated 6.56-fold. These findings suggest that the CD24 gene is important in the development of NAFLD. This could be related to the impact of CD24 gene expression on the immune/inflammatory response via T-cell activation [31]. Several immune cell-mediated inflammatory processes are involved in NAFLD and its progression to NASH. They also influence the generation of cytokines by necrotic liver cells [32]. This confirms the previous results detected by Feng et al., who observed up-regulation of CD24 gene expression in the livers of HFD-induced NAFLD mice and in cultured HepG2 cells exposed to glucolipotoxicity (palmitic acid or advanced glycation end products) [9]. Up until now, the precise role and the underlying mechanisms of CD24 in NAFLD progression remain unclear. However, Huang and colleagues identified a prominent correlation between CD24 and NAFLD/NASH. They mentioned that CD24 could play a key role in one of the pathways that may cause IR and may induce NAFLD/NASH in humans, including "glycolysis/gluconeogenesis", the "p53 signaling pathway", and "glycine, serine and threonine metabolism" [8]. Additionally, CD24 expression was higher in cases with fibrosis stage F1 compared to those with fibrosis stage F0, as the mean expression level of CD24 was 7.19 in F0 cases as compared to 8.65 in F1 patients, but without a significant difference (p = 0.588). This may be because all cases included in the present study have mild fibrosis. These results must be confirmed by other studies based on larger numbers of samples and on patients with severe stages of fibrosis. The changes in the liver tissue transcriptome in a subset of 25 mild-NAFLD and 20 NASH biopsies were examined in a cross-sectional study.
CD24 was revealed to be one of five differentially expressed genes (DEGs) positively linked with disease severity and to be a main classifier of mild and severe NAFLD [33]. Additionally, CD24-positive cells isolated from hepatocellular carcinoma cell lines exhibited stemness properties, such as self-renewal, chemotherapy resistance, metastasis, and tumorigenicity [34]. These results indicate that CD24 may play a role in hepatocyte injury and promote regeneration during the development and progression of NAFLD. Another Egyptian study detected that the CD24 polymorphism 170 CT/TT may affect the incidence of infection with CHC, as well as HCC [35]. They revealed that the P170T allele, which is expressed at a higher level than P170C, encodes a protein responsible for the progression of chronic HCV infection by affecting the efficiency of posttranslational GPI cleavage. Additionally, Robert and Pelletier (2018) showed that the P170T allele affects the progression of chronic HCV infection through posttranslational mechanisms [36]. Another study by Kristiansen et al. (2010) also suggested that CD24 SNPs are prognostic markers for hepatic carcinoma [37]. Interestingly, CD24 was also up-regulated in NAFLD patients with type 2 diabetes compared to non-diabetic cases, but without a significant difference. Another study, carried out by Shapira et al. (2021), reported that CD24 may negatively regulate peroxisome proliferator-activated receptor gamma (PPAR-γ) expression in male mice. This gene is a regulator of adipogenesis that plays a role in insulin sensitivity, lipid metabolism, and adipokine expression in adipocytes. Furthermore, they concluded that there is an association between CD24 and insulin sensitivity, suggesting a possible mechanism for diabetes [38].

## 5. Conclusions

The current study found that CD24 gene expression was considerably greater in NAFLD cases compared to controls.
This could indicate that CD24 may contribute to hepatic steatosis, but the current study showed that it cannot be used as an independent predictor of NAFLD. Further studies are required to confirm its diagnostic and prognostic value in the detection of NAFLD, to clarify its role in the progression of hepatocyte steatosis in patients with advanced stages of fibrosis, and to elucidate the mechanism of this biomarker in disease progression. However, our study is limited by its small sample size, by the fact that all participants have an early stage of NAFLD, and because accurate diagnosis of liver fibrosis or hepatocellular injury is invasive and very expensive. Although abdominal ultrasonography has low sensitivity for detecting mild NAFLD, as reported in the previous literature, it is the best low-cost available non-invasive technique to detect NAFLD. Because of ethical considerations, we did not rely on liver biopsy for diagnosis, as none of the patients had clinical manifestations. Moreover, the studied patients considered themselves healthy and refused to undergo further invasive investigations, including pathological examination via liver biopsy to detect fibrosis.
# The Influence of Ethnicity on Survival from Malignant Primary Brain Tumours in England: A Population-Based Cohort Study

## Abstract

### Simple Summary

Previous reports using broad ethnic group classifications have suggested that patient outcomes may vary. This study examined survival differences in malignant primary brain tumours of various morphologies between well-recorded and detailed ethnic groups for the whole of England. An ethnic difference in brain tumour survival was found, with patients of an Indian background, Any Other White, Other Ethnic Group, and Unknown/Not Stated Ethnicity Groups having better one-year survival than the White British Group, following adjustment for known prognostic factors. By investigating the ethnic variations associated with better brain tumour survival, we may begin to better understand any ethnic inequalities that exist and possibly identify subgroups of patients that could benefit from personalised medicine.

### Abstract

Background: In recent years, the completeness of ethnicity data in the English cancer registration data has greatly improved. Using these data, this study aims to estimate the influence of ethnicity on survival from primary malignant brain tumours. Methods: Demographic and clinical data on adult patients diagnosed with a malignant primary brain tumour from 2012 to 2017 were obtained (n = 24,319). Univariate and multivariate Cox proportional hazards regression analyses were used to estimate hazard ratios (HR) for the survival of the ethnic groups up to one year following diagnosis. Logistic regressions were then used to estimate odds ratios (OR) for different ethnic groups of (1) being diagnosed with pathologically confirmed glioblastoma, (2) being diagnosed through a hospital stay that included an emergency admission, and (3) receiving optimal treatment.
Results: After adjustment for known prognostic factors and factors potentially affecting access to healthcare, patients with an Indian background (HR 0.84, 95% CI 0.72–0.98), Any Other White (HR 0.83, 95% CI 0.76–0.91), Other Ethnic Group (HR 0.70, 95% CI 0.62–0.79), and Unknown/Not Stated Ethnicity (HR 0.81, 95% CI 0.75–0.88) had better one-year survival than the White British Group. Individuals with Unknown ethnicity were less likely to be diagnosed with glioblastoma (OR 0.70, 95% CI 0.58–0.84) and less likely to be diagnosed through a hospital stay that included an emergency admission (OR 0.61, 95% CI 0.53–0.69). Conclusion: The demonstrated ethnic variations associated with better brain tumour survival suggest the need to identify risk or protective factors that may underlie these differences in patient outcomes.

## 1. Background

Each year, over 5000 new cases of primary brain tumours are diagnosed in the United Kingdom (UK) [1]. England, in particular, has become more multicultural in recent decades [2] and has seen a steady increase in the incidence of malignant primary brain tumours [3]. One study considering broad ethnic groups found a higher incidence rate of 4.8 per 100,000 population from 2001 to 2007 for people from the White ethnic group compared to those from South Asian, Black, and Chinese ethnic groups (respective rates of 3.1, 2.8, and 2.7 per 100,000 population) [4]. English population-based studies have reported ethnic differences in the incidence of most cancers, with individuals from non-White groups generally having a lower cancer risk than the White Group [5,6]. Survival for the four common cancers has been widely reported [6], but has not been examined in detail for brain tumours using the well-defined ethnicity information now available.
A small study of high-grade gliomas in South-East England in 2000–2009 [7] reported that patients of White and Not Known ethnicities had the worst survival for all tumour groups after adjusting for sex, age, morphology, socio-economic deprivation, and co-morbidity. The improved and detailed National Health Service (NHS) data on ethnicity captured by the National Disease Registration Service, which is part of NHS England, provided an opportunity to explore the impact of ethnicity on brain tumour survival. This resulted from major efforts across the NHS to increase self-reporting of this variable. With data collected on over 300,000 cancer cases in England each year, this is also the first English study to consider the more detailed classifications for malignant primary brain tumours, including all gliomas, primary central nervous system lymphoma (PCNSL), and unclassified malignant brain neoplasms. It aims to examine the possible effect of ethnicity on the route or pathway taken to diagnosis and on receiving optimal treatment. A better understanding of any ethnic inequalities in brain cancer could potentially lead to improved treatment or services for these patients.

## 2.1. Study Population

Data on all adult patients diagnosed with a malignant primary brain tumour during 2012–2017, who were resident in England and registered with a general practitioner (GP), were extracted from the English cancer registration data.

## 2.2. Selection of Cases

Cases for this study were identified using the International Classification of Diseases (version 10) (ICD-10) tumour site C71. For those with PCNSL, the ICD-10 site code was used along with the morphology codes for lymphoma. Other inclusion criteria were cases having a complete tumour registration and known sex. The brain tumour morphological subtypes considered in this study were based on the 2016 WHO Classification of Tumours of the Central Nervous System [8].
WHO updated this classification in 2021; however, the changes have minimal effect on the analysis of these data. Due to sample sizes, histological tumour subtypes were grouped as follows: glioblastoma, anaplastic astrocytoma, astrocytoma NOS, oligodendroglioma, PCNSL, malignant glioma, and unclassified malignant. Data on all tumours were extracted from the English cancer registration irrespective of their pathological confirmation; gliomas without a specified classification or recorded as unclassified malignant neoplasms were included. In addition, glioblastomas with a pathological confirmation were included, but tumours of benign, uncertain, and metastatic nature were not. Molecular data are not available for this study cohort. Inpatient hospital episode statistics (HES) data were linked to the cancer registration data from 2012. These records include ethnicity data that are almost always self-reported upon admission to NHS hospitals. The categories of ethnicities were as follows: White British, Bangladeshi, Indian, Pakistani, Chinese, Black African, Black Caribbean, and Unknown/Not Stated; due to small numbers in the remaining groups, White Irish and Any Other White were grouped together as Any Other White, and Mixed Ethnic Groups and Any Other Ethnic Group were grouped as Other Ethnic Group. Socio-economic deprivation was measured using the income domain of the index of multiple deprivation (IMD) 2015, divided into quintiles across England and Wales, and assigned to cases using the postcode of residence at diagnosis. The Charlson co-morbidity score was based on conditions occurring within one year of the cancer diagnosis date. The conditions were weighted according to their severity, and scores were grouped as 0 (where none were recorded), 1, and 2 or more. Route to Diagnosis is defined as the sequence of interactions between the patient and the NHS leading to a cancer diagnosis [9].
This was identified using an algorithm linking various sources based on the setting of diagnosis and the pathway and referral route into secondary care. Information on surgical resections, chemotherapy, and radiotherapy treatments received within the first 18 months following diagnosis was also extracted. Treatment options were categorised to reflect clinical practice as: radiotherapy only, chemotherapy only, surgical resection only, all three treatments given as surgical resection followed by radiotherapy and chemotherapy (optimal treatment), radiotherapy plus chemotherapy, surgical resection plus radiotherapy, surgical resection plus chemotherapy, and no treatment. Surgical resections did not include cases with biopsies only.

## 2.3. Data Analysis

We first extracted 27,934 records, cleaning the dataset to exclude duplicated cases, those without the required brain tumour morphology, those with unknown vital status, and those registered by death certificate only (DCO) (Figure 1). Survival time was calculated from the date of diagnosis until the date of death, with a survival period of up to one year. To retain 145 patients who died on their date of diagnosis, we added half a day to their survival time. The final study population included 24,319 cases. Initially, we examined the distribution of patients by demographic factors (age, sex, area of residence, and socio-economic status), co-morbidity, tumour morphology, route to diagnosis, and treatment factors. Univariate and multivariate Cox proportional hazards regressions were then used to estimate hazard ratios (HR) and their 95% confidence intervals (95% CI) for the survival of each ethnic group up to one year following diagnosis. The follow-up period ended on 31 December 2018. χ² tests were used to estimate the p-values for trend and heterogeneity, excluding unknown categories.
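The non-parametric survival curves that complement such Cox models (the Kaplan–Meier analysis shown later in Figure 2) are product-limit calculations. A minimal sketch, assuming times in months and an event flag of 1 = death, 0 = censored (toy data, not the study's):

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.

    Returns [(t, S(t))] at each distinct event time: S drops by a
    factor (1 - d/n) where d deaths occur among n still at risk.
    Censored observations reduce the risk set without dropping S.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = 0
        removed = 0
        while i < len(data) and data[i][0] == t:  # group ties at time t
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= removed
    return curve

# Three patients: deaths at months 1 and 2, one censored at month 3.
print(kaplan_meier([1, 2, 3], [1, 1, 0]))  # → [(1, 0.666...), (2, 0.333...)]
```

The half-day added to same-day deaths in the study plays the same role here: it keeps those survival times strictly positive so the estimator can include them.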
We then carried out a sensitivity analysis in which each variable was adjusted to identify how much variation it contributed to the model; as a result, we finally focused the analysis on age, sex, co-morbidity, socio-economic deprivation, tumour morphology, route to diagnosis, and treatment received. Due to the high fatality of malignant primary brain tumours, cancer-specific survival was not studied, as it is similar to overall survival. Logistic regression was used to generate odds ratios (OR) (and their 95% CI) for each ethnic group of (1) being diagnosed with pathologically confirmed glioblastoma, (2) being diagnosed during a hospital stay that included an emergency admission, and (3) receiving optimal treatment (surgical resection followed by radiotherapy and chemotherapy). ORs were adjusted for age, sex, socio-economic deprivation, co-morbidity, morphology, route to diagnosis (the patient's pathway to diagnosis), and treatment. All analyses were performed using Stata software, version 16 (StataCorp, TX, USA).

## 2.4. Ethical Approval

Data for this study were collected and analysed under the National Disease Registries Directions 2021, made in accordance with sections 254(1) and 254(6) of the 2012 Health and Social Care Act. Further ethical approval for this study was not required per the definition of research according to the UK Policy Framework for Health and Social Care Research.

## 3. Results

Data from 24,319 patients with a malignant primary brain tumour diagnosed between 2012 and 2017 in England were included. Table 1 displays the distribution of patient, tumour, and clinical characteristics, together with univariate and mutually adjusted HRs. Brain tumour diagnoses increased with age, peaking at 65–74 years, and most patients were men (58.0%, n = 14,094). In absolute numbers, diagnoses were more frequent in people living in Southeast England, an area that is highly populated and more ethnically diverse.
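For intuition, each adjusted OR reported in the tables that follow generalises the simple 2×2-table odds ratio. A hedged sketch of the unadjusted calculation with a Wald 95% confidence interval (the counts below are invented, not study data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI from a 2x2 table:
    a = exposed with outcome,    b = exposed without,
    c = unexposed with outcome,  d = unexposed without."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Invented counts: odds 10/20 vs. 5/40 give OR = 4.0.
or_, lo, hi = odds_ratio_ci(10, 20, 5, 40)
print(round(or_, 2))  # → 4.0
```

A CI lying entirely below 1 (such as OR 0.61, 95% CI 0.53–0.69 in the Results) indicates significantly reduced odds relative to the reference group; a CI spanning 1 indicates no detectable difference.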
Overall, the most aggressive morphology, glioblastoma, was the most common type (60.7%, n = 14,768). The Kaplan–Meier analysis for brain tumour morphology (Figure 2) demonstrates glioblastoma as having a very high mortality, followed by malignant glioma (7.0% of all cases, n = 1709) and unclassified malignant tumours (11.1% of all cases, n = 2707) (log-rank test, p < 0.001). Over one half of cases (53.2%, n = 12,926) were diagnosed during a hospital stay that included an emergency admission, with most patients receiving either the optimal treatment (23.0%, n = 5585) or no treatment (34.9%, n = 8483). In the univariate analysis, each of the covariates was correlated with survival. The effects of age, sex, and co-morbidity were attenuated in the mutually adjusted analyses. Almost all patients (95.6%) were recorded as having a known ethnicity. The most common ethnic group, representing 85.5% (n = 20,795) of the patients, was the White British Group, followed by 4.2% (n = 1018) from the Any Other White Group and 2.8% (n = 674) from the Other Ethnic Group. The more specific ethnic groups were less common, with 1.3% (n = 321) of patients defining themselves as Indian, 0.8% (n = 186) as Pakistani, and less than 0.4% as Bangladeshi (n = 30), Chinese (n = 37), Black African (n = 84), or Black Caribbean (n = 94) (Table 2). The univariate model for ethnicity showed a survival difference, and the mutually adjusted model demonstrated that patients from the Other Ethnic Group and with Unknown/Not Stated Ethnicity had an 18% and a 23% decreased risk of death from any cause, respectively, compared to the White British Group. In a sensitivity analysis, the association of survival with age seemed to disappear in most non-White ethnic groups. This could be explained by the younger age of these groups leading to a lower median age at diagnosis than for the White British population (Table 2).
The effect on survival in the Unknown/Not Stated Group was less sensitive to statistical adjustment by age, as the median age was older than for the White British Group. After full adjustment for age, sex, co-morbidity, socio-economic deprivation, tumour morphology, route to diagnosis, and treatment received, patients from the Indian Group (HR 0.84, 95% CI 0.72–0.98), Any Other White (HR 0.83, 95% CI 0.76–0.91), Other Ethnic Group (HR 0.70, 95% CI 0.62–0.79), and Unknown/Not Stated Ethnicity (HR 0.81, 95% CI 0.75–0.88) had better one-year survival than the White British Group (Table 3). There was no difference between the White British Group and the remaining Bangladeshi, Pakistani, Chinese, Black Caribbean, and Black African ethnic minority groups. The ethnic difference in survival was further explored by investigating whether there was any interaction between ethnicity and glioblastoma diagnosis, route or pathway to diagnosis, and optimal treatment received (Table 4). The Any Other White Group was more likely to be diagnosed through a hospital stay that included an emergency admission (OR 1.16, 95% CI 1.02–1.33). The Other Ethnic Group was nearly a third more likely to receive a diagnosis of glioblastoma (OR 1.28, 95% CI 1.04–1.56) than the White British Group. However, individuals with Unknown/Not Stated Ethnicity had the most favourable prognosis and were less likely to be diagnosed with a glioblastoma (OR 0.70, 95% CI 0.58–0.84), less likely to be diagnosed through a hospital stay that included an emergency admission (OR 0.61, 95% CI 0.53–0.69), and less likely to receive the optimal treatment option for their other-than-glioblastoma diagnosis (OR 0.39, 95% CI 0.31–0.49).

## 4.1.
Main Findings This study of 24,319 people residing in England and diagnosed with a brain tumour between 2012 and 2017 shows better one-year survival for patients from Indian, Any Other White, Other Ethnic Groups, and Unknown/Not Stated Ethnic Groups than for the White British Group (HR 0.84 ($95\%$ CI 0.72–0.98), HR 0.83 ($95\%$ CI 0.76–0.91), HR 0.70 ($95\%$ CI 0.62–0.79), and HR 0.81 ($95\%$ CI 0.75–0.88), respectively). The survival analysis was adjusted for age, sex, co-morbidity, socio-economic deprivation, tumour morphology, route to diagnosis, and treatment received. Individuals with Unknown/Not Stated Ethnicity had the best prognoses and as a group were less likely be diagnosed with glioblastoma or to be diagnosed through a hospital stay, including an emergency admission. ## 4.2. Comparisons to Other Findings In comparison to the smaller study by Ratneswaren et al. [ 2014], which was limited to high-grade glioma patients living in South East England [7], our current study was able to incorporate additional factors that may influence the impact of ethnicity on survival. In this larger national dataset, the heterogenous ethnicities were categorised into better defined groups for a precise analysis. We also found the Indian and Other Ethnic Group had a better survival than the White British Group. However, we identified the Unknown/Not Stated Ethnic Group having a better one-year survival than the White British Group, in contrast to the reverse finding in the earlier study. This could be due to a higher proportion of unknown ethnicity data, which was $21.7\%$ compared to only $4.4\%$ in the current study. US population-based studies have also reported racial and ethnic variations in brain tumour incidence and survival. Most have shown that Caucasian people have poorer survival outcomes compared to Black/African Americans and Asian/Pacific Islander Americans [10,11,12,13,14,15]. 
Other work, however, has reported that African Americans have an increased risk of death from malignant brain tumour compared to Caucasians and other races and ethnicities [16], which was explained further by an interaction between race and surgery type.

## 4.3. Interpretations and Implications

In this study, we have demonstrated that the White British Ethnic Group has poorer survival compared to other ethnic groups. An English paper by Maile et al. (2016) reported incidence data broadly similar to the US finding that patients from White Ethnic Groups were significantly more likely to develop glioblastoma than other racial/ethnic groups [4,17]. They did not evaluate survival; however, their results could help explain the association between White British ethnicity and a higher risk of mortality from high-grade glioma. Increasing age is known to be a poor prognostic factor for patients with malignant brain tumour [17]. The demography of ethnic minorities in England reflects the fact that people from these groups are younger and congregate in major cities, such as London, rather than in other areas of England. A cohort study from the US also identified that patients of Hispanic background were diagnosed at a younger age compared to non-Hispanic Whites and had improved overall survival [18]. A report by The King's Fund suggested that, overall, people from ethnic minorities have poorer access to UK healthcare compared to the White British Group [19], and this could correlate with fewer individuals from these groups being registered with the NHS. As a result, the probability of White people being diagnosed with a glioma could be increased due to their greater use of diagnostic tools, even from a young age, and therefore, they have a greater risk of ionising radiation exposure [20]. Since our finding of better survival for patients from the Any Other White and Unknown/Not Stated Ethnic Groups is new, it needs further exploration.
One explanation could be that these individuals travel to their countries of origin for better healthcare and social support following their diagnosis, which could mean that their deaths abroad were not formally registered in the English system [21]. Brain tumours are considered difficult to diagnose [22], as these cancers tend to (1) involve 3 or more GP visits before diagnosis [23,24] and (2) present as an emergency [9,25]; both could lead to poorer outcomes [9]. The potential impact of family support on outcome might also differ between ethnic groups. People from minority ethnic groups, particularly those from Asian backgrounds, are more likely to be surrounded by extended families than the nuclear family structure typical of the White British Group; it is possible that extended families may be more likely to recognise subtle signs of a brain tumour, including neuro-cognitive changes, as well as possible recurrences, and encourage earlier diagnosis. The current standard therapy for gliomas consists of surgical resection followed by adjuvant chemotherapy and radiation, prolonging median overall survival to 15 months for glioblastomas [26,27], and is represented in this study by the optimal treatment option. From our results, we demonstrated that the Unknown/Not Stated Ethnic Group are less likely to receive optimal treatment, but that could be due to their lower chance of being diagnosed with a glioblastoma. Studies investigating possible factors explaining brain tumour occurrence have identified genes that could be associated with glioma development and with the tumours carrying the worst prognosis [28,29]. The presence of such genes and their specific alterations could perhaps explain the differences in prognosis by ethnicity. Epigenetic age acceleration, which is the difference between age predicted by DNA methylation and chronological age, has been linked with many cancers [30]. A recent study by Crimmins et al.
(2021), which examined epigenetic clocks to evaluate a linkage with race/ethnicity, found that the majority of clocks indicated slower epigenetic ageing among Hispanic and African American individuals compared to White individuals [31]. The reports of differing incidence and survival by racial/ethnic groups make it important to explore possible genetic alterations and variation in signalling pathways to identify and compare polymorphisms between ethnic minority and White individuals [13]. For example, one study found a $42\%$ reduction in the risk of glioma in patients with a history of diabetes [32], and a recent meta-analysis confirmed this inverse relationship, in which elevated blood sugar, or a previous history of diabetes, is inversely associated with the risk of glioma [33]. In England, people of Black (African and Caribbean) and South Asian (Bangladeshi, Indian, and Pakistani) backgrounds are at higher risk of developing type 2 diabetes from a younger age compared to those of a White background [34], and this could possibly be associated with the decreased glioma mortality observed here. The better survival for Indian individuals after adjusting for other factors was of particular interest and may suggest that there are other more specific influencing factors. As there were no significant differences between the Indian and White British Groups in terms of patient characteristics, tumour morphology, and route or pathway to diagnosis, further explanation is needed for the difference that emerges when treatment received is added to the Cox proportional hazards regression model. Given curcumin's anti-tumoural effects on glioma cells in preclinical in vitro and in vivo studies [35], we speculate that an Indian diet that usually includes curcumin might play a role in prolonged brain tumour survival, perhaps linked to a better response to treatment.
A recent study in India evaluated the molecular biomarkers of brain tumours in Indian patients [36] and reported a high prevalence of isocitrate dehydrogenase 1 (IDH1) mutation in astrocytoma and glioblastoma in these patients [37]. The presence of IDH1 mutation correlates with a survival benefit and is more common among glioblastomas progressing from a lower grade glioma compared with 5–$10\%$ of de novo glioblastomas [38,39]. Consequently, the association between individuals of an Indian background and improved survival could be related in some way to this mutation, in addition to, or perhaps independent of, the therapeutic potential of curcumin. Obtaining detailed information on molecular, genetic, and lifestyle or environmental factors could enable us to further compare outcomes with other populations.

## 4.4. Strengths and Limitations

The extended period of time covered by this study, along with the greatly improved cancer registration data in recent years for England, has meant that more detailed ethnic groups can be analysed. The previously reported NCIN incidence data for the years 2002 to 2006, and other related studies, found that a quarter of cancer patients had unknown (i.e., missing) ethnicity information [4,6,7]. While cancer registration acknowledges that ethnicity data may not be self-reported and may possibly have been derived from already held information, the very large increase in completeness means we can be confident in these analyses. In these new data, only $4\%$ of patients had unknown/not stated ethnicity information, and they were analysed separately to get a better understanding of this individual group. Our study also considered the importance of analysing more defined ethnic groups, as significant heterogeneity of risk for many cancers can be seen particularly among those from Black and South Asian Groups [4]. Another strength of this study is the adjustment for many prognostic variables that could vary by ethnicity and prognosis.
However, other and unknown factors could still be relevant, including the patient's performance status, tumour location within the brain, extent of tumour excision, and, most interestingly, molecular biomarkers and genetic information. Our study had some limitations. Reporting patterns of incidence for the histological subtypes by ethnicity would have strengthened the study; however, this was restricted by the small number of patients in each subgroup. In addition, limited numbers of patients in the mixed ethnic groups and those from a White Irish background meant they had to be combined within the Any Other Ethnic and Any Other White Groups, respectively. Additionally, cancer registration did not collate information that could possibly influence survival, such as recurrences, glioblastoma progressing from a low-grade glioma, or biopsy-only procedures; hence, we observed a high proportion of patients ($34.9\%$) who had no surgical resection but may have had only a diagnostic biopsy.

## 5. Conclusions

To obtain a better understanding of potential ethnic differences in malignant primary brain tumour survival, we carried out a detailed evaluation of factors, including age at diagnosis, sex, co-morbidity, socio-economic deprivation, histologic tumour subtype (which is correlated with tumour grade), route or pathway to diagnosis, and treatment options, that might affect prognosis. After controlling for these variables, we found that patients from the Indian, Any Other White, Other Ethnic, and Unknown/Not Stated Ethnic Groups had better one-year survival compared to the White British Group. To determine whether biological, behavioural, or clinical factors are driving these survival differences, more data on patients' clinicopathological characteristics are needed. This will help us better understand any ethnic inequalities in brain cancer and identify improvements to the health service for specific groups.
# Sleep-Disordered Breathing among Saudi Primary School Children: Incidence and Risk Factors

## Abstract

This study aimed to identify the incidence and risk factors of sleep-disordered breathing (SDB) using an Arabic version of the pediatric sleep questionnaire (PSQ). A total of 2000 PSQs were circulated to children aged 6–12 years who were randomly selected from 20 schools in Al-Kharj city, Saudi Arabia. The questionnaires were filled out by the parents of participating children. The participants were further divided into two groups (younger group: 6–9 years and older group: 10–12 years). Out of 2000 questionnaires, 1866 were completed and analyzed ($93.3\%$ response rate), of which $44.2\%$ were from the younger group and $55.8\%$ were from the older group. Among all the participants, a total of 1027 participants were female ($55\%$) and 839 were male ($45\%$), with a mean age of 9.67 ± 1.78 years. The analysis showed that $13\%$ of children were at high risk of SDB. Chi-square test and logistic regression analyses within this study cohort showed a significant association between SDB symptoms (habitual snoring, witnessed apnea, mouth breathing, being overweight, and bedwetting) and the risk of developing SDB. In conclusion, habitual snoring, witnessed apnea, mouth breathing, being overweight, and bedwetting strongly contribute to the development of SDB.

## 1. Introduction

Many airway dysfunctions, including obstructive sleep apnea (OSA) and primary snoring, are caused by sleep-disordered breathing (SDB) [1]. OSA is a severe form of SDB characterized by upper airway obstruction, either partial or intermittent, that interrupts the normal sleeping pattern [2]. Among children, SDB is linked with growth retardation, behavioral problems, disturbances in cognitive development, failure to thrive, and attention deficit/hyperactivity disorder [2,3].
The myriad SDB symptoms in children also include abnormal breathing, snoring, sweating, aggressiveness, irritability, hyperactivity, sleepiness, excessive fatigue, memory impairments, and poor school performance, among many others [1,2,3,4,5]. Furthermore, the craniofacial morphology associated with SDB in children includes a retrognathic mandible, increased lower facial height, a narrow maxillary arch with a high vault, posterior crossbite, anterior open bite, and restriction in the upper airway space [5,6]. The prevalence of pediatric OSA ranges from 1–$6\%$, while that of habitual snoring varies by definition and ranges from 4–$17\%$ [1,7,8,9,10]. A previous study in southern Italy found that $4.9\%$ of school children had habitual snoring and that, among them, only $1\%$ had OSA [11]. Another Turkish study reported a snoring prevalence of $7\%$ among children aged 6–13 years [12]. Moreover, prevalence rates of habitual snoring of $11.4\%$, $12\%$ and $27.6\%$ were observed in the Indian, Chinese and Brazilian populations, respectively [13,14,15]. In Saudi Arabia, the prevalence among adults and children leans toward the higher end of the range reported in the literature [16,17,18]. However, the majority of children with OSA remain undiagnosed [19]. To diagnose OSA in children, nocturnal sleep-based polysomnography (PSG) is widely used and is considered the gold standard [8]. Regardless of its diagnostic advantages, PSG presents challenges related to its cost, time, complexity, and availability/access, which necessitates an efficient screening test that may prompt early detection and treatment [9,19,20]. As dentists and selected dental specialists obtain radiographs regularly and examine children daily, they could effectively identify the children who are at risk of SDB [9,21].
This is especially true given the previously mentioned association between SDB and craniofacial growth and development, as well as the proven positive responses to various treatment modalities ranging from interceptive orthodontics to orthognathic surgery [22,23,24]. Although a thorough clinical history and physical examination are an integral part of best practice in any healthcare profession, including assessment of SDB, they may not be sufficient to identify children suspected (or at high risk) of OSA [19]. It is important to consider efficient, reliable, and accurate screening tests for timely diagnosis and management (including referral) of children with SDB, since this might prevent associated comorbidities [20]. Pediatric sleep questionnaires have been developed for the screening of SDB in children and adolescents, and epidemiological studies have shown them to be clinically significant and relevant [25,26]. As subjective parent-report tools, sleep questionnaires cover the risk factors and signs and symptoms of SDB and OSA [27]. Among these questionnaires, the pediatric sleep questionnaire-22 (PSQ-22) has been validated in groups of referred snoring children and controls, showing excellent specificity and sensitivity for identifying children with OSA [2]. Therefore, this study aimed to determine the incidence of SDB and identify related risk factors among primary school children in Al-Kharj, Saudi Arabia.

## 2. Materials and Methods

This was a cross-sectional study that was conducted from September 2018 to December 2018 in Al-Kharj city, Saudi Arabia. Al-Kharj city is situated in the Riyadh province of Saudi Arabia, with about 376,325 residents (https://www.stats.gov.sa (accessed on 21 November 2017)). Ethical approval was obtained from the College of Dentistry Research Centre at Prince Sattam Bin Abdulaziz College of Dentistry (Registration No: 1439-03-003).
In addition, permission to conduct this research was also obtained from the authority of the Ministry of Education. A stratified randomization technique was used to list the schools to be included in this study to ensure proper sample representation in Al-Kharj. Free online randomization software (http://www.randomization.com (accessed on 21 November 2017)) was used to select 20 schools (10 boys' schools and 10 girls' schools). A formal letter was sent to the principal of each school to obtain permission by explaining the purpose of the study. Once the school authorities agreed to participate in the current research, they were requested to provide a list of primary school students with an assigned number corresponding to each student. Then, a randomization table was used to select the students. A required sample size of 454 subjects was estimated based on a statistical power calculation described by Pourhoseingholi and colleagues (2013) [28], considering 0.20 non-response, a confidence level of $95\%$, a precision of 0.04, and a $20\%$ prevalence of SDB as reported in Saudi-based literature using the PSQ [17,18]. Each child was given a folder in order to obtain permission from the parents, which included: (1) a cover page explaining the aim of the study, the significance of the study, and the confidentiality measures taken to protect collected information, (2) a consent form, and (3) the PSQ. Parents/guardians were asked to observe their child's sleep pattern for one week before filling out the PSQ to improve response accuracy. Children aged 6–12 years who presented a consent form signed by their parents were included in the current study. The participants were divided into two groups by age: the younger group (age 6–9 years) and the older group (age 10–12 years). A total of 2000 folders containing a cover letter, a consent form, and the PSQ were distributed to randomly selected children, and the folders were collected a week later.
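Sample-size calculations of this kind are usually based on Cochran's formula for estimating a single proportion, inflated for expected non-response. The sketch below uses the stated parameters; the exact adjustment used by the authors is not specified, so the figure of 454 reported above may come from a different variant of the calculation:

```python
import math

def sample_size(p, d, z=1.96, nonresponse=0.0):
    # Cochran's formula for estimating a single proportion,
    # inflated for an expected non-response fraction.
    n0 = z**2 * p * (1 - p) / d**2            # base sample size
    return math.ceil(n0 / (1 - nonresponse))  # inflate for non-response

# Stated parameters: 95% confidence (z = 1.96), precision d = 0.04,
# assumed prevalence p = 0.20, expected non-response 0.20.
base = sample_size(0.20, 0.04)                         # 385
adjusted = sample_size(0.20, 0.04, nonresponse=0.20)   # 481
```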
The PSQ was previously validated by Chervin et al. (2000); therefore, no validation was required for this study. Moreover, the Arabic version of the PSQ was validated by Baidas et al. (2019); therefore, the Arabic form of the PSQ was used in this study so that parents could properly understand the items [17]. The PSQ contains 22 items, and each item offers three response options: 'yes' = 1; 'no' = 0; and 'don't know' = missing. Participants who answered 'yes' to ≥8 items were considered at high risk of SDB, whilst those who answered 'yes' to <8 items were considered at low risk of SDB.

## Statistical Analysis

Statistical analysis was performed using SPSS software version 28 (IBM Corp., Armonk, NY, USA). The demographics of the children and the prevalence rate of SDB were assessed with descriptive statistics. A Chi-square test was performed to identify the differences in demographic variables and SDB symptoms related to the risk factors. For the Chi-square test (2 × 2), demographic variables were dichotomized as (male/female) for gender and (older/younger) for age when the test was performed to identify differences in being at high risk of developing SDB (yes/no). Moreover, possible risk factors for SDB were assessed by binary logistic regression. A p-value < 0.05 was considered statistically significant.

## 3. Results

A total of 1866 parents out of 2000 agreed to participate and completed the questionnaire ($93.3\%$). The mean age of the participants was 9.67 ± 1.78 years. Among all the participants, a total of 1027 children were female ($55\%$) and 839 were male ($45\%$). The younger group and older group comprised $44.2\%$ and $55.8\%$ of participants, respectively. The outcome of the PSQ scoring among all the participants is presented in Table 1. Based on the PSQ scores, a total of 243 children ($13\%$) were categorized as being at high risk of SDB.
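The scoring rule described above (eight or more 'yes' answers out of 22 flags high risk, with 'don't know' treated as missing) can be sketched as a small function; the encoding below is illustrative and not taken from the PSQ documentation:

```python
def psq_risk(responses):
    """Classify PSQ risk per the rule in the text.
    responses: 22 values, each 1 ('yes'), 0 ('no'), or None ('don't know')."""
    yes_count = sum(1 for r in responses if r == 1)
    return "high" if yes_count >= 8 else "low"
```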
Table 2 shows that the most prevalent symptom of SDB was mouth breathing ($14.4\%$) and the least prevalent symptom was witnessed apnea ($6.6\%$). The Chi-square test showed that there were significant differences between gender (females/males) and age (younger/older) in relation to being at high risk of developing SDB ($p < 0.05$). Moreover, there was a significant difference between high- and low-risk children concerning habitual snoring, mouth breathing, witnessed apnea, being overweight, and bedwetting (Table 2). Table 3 shows that binary logistic regression exhibited no significant association between gender and developing SDB. However, the younger age group was significantly associated with the risk of developing SDB: the risk of developing SDB was 1.43 times higher in younger children compared to older children. In addition, children with habitual snoring, witnessed apnea, mouth breathing, being overweight, and bedwetting were at 8.9 times, 2.15 times, 6.6 times, 4.57 times, and 4.81 times higher risk of developing SDB, respectively (Table 3).

## 4. Discussion

The current study determined the incidence of SDB and identified related risk factors among primary school children in Al-Kharj, Saudi Arabia, with the Arabic version of the PSQ, which is the most used pediatric sleep questionnaire [17,27]. The original PSQ was translated into the Arabic language by Baidas et al. (2018), who assessed 1350 Saudi children, with $91\%$ of the questionnaires reporting good concordance [17]. Undiagnosed and untreated pediatric OSA could result in significant medical comorbidities including, but not limited to, cardiovascular, cognitive, metabolic, and growth hormone dysfunction [29]. The lack of awareness of pediatric SDB is one of the main barriers for families to seeking proper care, which starts with a formal diagnosis by a sleep physician using PSG followed by proper treatment [1].
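Odds ratios such as those in Table 3 summarize the exposure-by-outcome cross-tabulation underlying the logistic model; a minimal sketch with made-up counts (not the study's data) is:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Wald 95% CI from a 2x2 table:
    a = exposed & high risk, b = exposed & low risk,
    c = unexposed & high risk, d = unexposed & low risk."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 20 of 100 snorers vs 10 of 100 non-snorers at high risk.
or_, lo, hi = odds_ratio_ci(20, 80, 10, 90)
```

Because the lower confidence bound here falls just below 1, this hypothetical association would not reach significance, which mirrors how the gender effect in the adjusted model failed to do so.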
This is the first population-based study in Al-Kharj to determine the incidence of SDB among school-going children. The response rate of the current study was $93.3\%$, from a total of 1866 participants, which is relatively large compared to previous studies [17,30] and somewhat comparable to a recent Saudi study [13]. Such a high response rate might be attributed to the way the questionnaire was distributed, through school principals, who are figures of authority at their schools. The prevalence of SDB varies by definition and ranges from 4–$17\%$ [1,7,8,9,10]. The main finding of the study was that $13\%$ of participants aged 6–12 years were at a high risk of SDB. Additionally, $14.4\%$ were reported to be mouth breathers, and $6.6\%$ had witnessed apnea. Previous Saudi studies that used the PSQ and included children with similar age ranges and comparable gender distributions found $21\%$ and $23\%$ of children at high risk of SDB, higher than in the current study [17,18]. Moreover, those studies reported habitual snoring more frequently ($10.7\%$ and $15.9\%$) and witnessed apnea less frequently ($3.4\%$ and $4\%$) than the present study [17,18]. The variation between our findings and Baidas et al. (2018) is likely due to differences in the operational definition of what constitutes habitual snoring. Moreover, Al Ehaideb et al. (2021) conducted a PSQ-based survey of 285 Saudi children seeking orthodontic treatment and found that $47.7\%$ were at high risk of developing SDB [31]. They also reported $11.3\%$ and $11.6\%$ of their sample to have habitual snoring and witnessed apnea, respectively [31]. These larger numbers could be because their sample was collected from an orthodontic clinic at a tertiary public hospital that receives referrals of cases with moderate or severe forms of malocclusion.
Globally, studies that used the PSQ reported the prevalence of children at high risk of developing SDB to range from $7.9\%$ to $12.8\%$ [1,32,33]. The current study shows that there is a significant difference ($p = 0.009$) in SDB risk between the younger and older groups. Additionally, it showed that younger children were 1.43 times more likely to be at high risk of developing SDB compared to older children. In children, enlarged adenoids are the main reason for developing SDB, and adenoidectomy/tonsillectomy is considered the primary treatment of SDB in children [9,34]. Adenoids reach their maximum size between the ages of 5 and 7 and begin to shrink afterward [35]. Therefore, enlarged adenoids were more likely to be found in the younger age group in our sample, aged 6 to 9 years, and this might be the reason why the younger age group exhibited increased risks of developing SDB. In addition to enlarged adenoids, obesity has also been considered a possible cause of OSA. This study showed that there was a significant association between the high-risk group and children being overweight as perceived by their parents, which agrees with other studies conducted in Saudi Arabia [17,18,31]. In terms of gender distribution, this study showed a significant association between females and OSA, which is opposite to the reported male predilection of pediatric OSA in other studies [1,11,12,14,17]. The reason why more males are usually affected by OSA is suggested to be the difference in puberty age between males and females, as females enter puberty first. This variation in OSA prevalence between males and females usually increases with age [1].
This finding reinforced the information from previous studies in which a significant difference between gender and OSA was observed, including children older than 12–13 years [36,37], while studies that did not show a significant association between gender and OSA were mostly limited to the younger age group [1,10]. This might explain why the current study, which was limited to children younger than 12 years old, was not in line with other studies in terms of the association between males and OSA. Nonetheless, it is important to note that the unique contribution of gender in the regression model was not significant in relation to being at high risk of developing SDB when controlling for other variables (including age, habitual snoring, witnessed apnea, parent perception of the child being overweight, and bedwetting). Therefore, cautious interpretation is warranted regarding this study's findings in terms of gender association with, and contribution to, being at high risk of developing SDB. It is not surprising that this study showed a significant association between high risk of developing SDB and snoring, as snoring is one of the main symptoms of SDB and OSA [1]. Similar findings were observed in other previous studies [6,17,31]. Hence, parents need to appreciate the importance of seeking medical care when their child snores during sleep. In the future, it would be worthwhile to explore whether the parents who participated in the study have considered medical care for their children, especially those with a high risk of pediatric SDB. One of the common misapplications of PSQ questionnaires in the literature is the interpretation of a “yes” response to question no. 6 in the first domain (Table 1), as many authors have interpreted this as the presence of OSA [17,29].
It is recommended that authors use the term “witnessed apnea” for “yes” responses to this question instead of making the inaccurate assumption that the child has OSA or is at high risk of developing SDB. The current study also showed that witnessed apnea was more common in the high-risk group ($19.3\%$) than in the low-risk group ($4.7\%$). Additionally, children with witnessed apneas were 2.84 times more likely to be at high risk for SDB. However, not every child with witnessed apnea was at high risk for SDB: there were 77 children who had witnessed apneas yet were considered low risk for SDB based on the answers to the remaining questions. This study carries some risk of bias due to the nature of its methodology. Moreover, being overweight was assessed by the parents' perception; using the body mass index would have been more objective yet was much harder to do, especially given the large sample used in the current study. Additionally, having a PSG would have given a definitive OSA diagnosis. A future study could include a subgroup of the total sample to assess the prevalence more accurately. Although it is not the aim of the study, public awareness programs regarding snoring in children and the risk of pediatric SDB and its symptoms need to be implemented in schools, as many local studies have reported similar findings of high rates of snoring and witnessed apnea.

## 5. Conclusions

In conclusion, $13\%$ of the school-going children in Al-Kharj, Saudi Arabia, are at high risk of developing SDB, with younger children at greater risk. Moreover, habitual snoring, mouth breathing, being overweight, bedwetting, and witnessed apnea were more prevalent in children at high risk of SDB. Thus, the importance of further exploration of SDB among Saudi school-going children needs to be recognized, strategized, and materialized.
# Hypergravity Increases Blood–Brain Barrier Permeability to Fluorescent Dextran and Antisense Oligonucleotide in Mice

## Abstract

The earliest effect of spaceflight is an alteration in vestibular function due to microgravity. Hypergravity exposure induced by centrifugation is also able to provoke motion sickness. The blood–brain barrier (BBB) is the crucial interface between the vascular system and the brain that ensures efficient neuronal activity. We developed experimental protocols of hypergravity on C57Bl/6JRJ mice to induce motion sickness and reveal its effects on the BBB. Mice were centrifuged at 2× g for 24 h. Fluorescent dextrans of different sizes (40, 70 and 150 kDa) and fluorescent antisense oligonucleotides (AS) were injected into mice retro-orbitally. The presence of fluorescent molecules was revealed by epifluorescence and confocal microscopy in brain slices. Gene expression was evaluated by RT-qPCR from brain extracts. Only the 70 kDa dextran and AS were detected in the parenchyma of several brain regions, suggesting an alteration in the BBB. Moreover, Ctnnd1, Gja4 and Actn1 were upregulated, whereas Jup, Tjp2, Gja1, Actn2, Actn4, Cdh2 and *Ocln* genes were downregulated, specifically suggesting a dysregulation in the tight junctions of the endothelial cells forming the BBB. Our results confirm the alteration in the BBB after a short period of hypergravity exposure.

## 1. Introduction

Astronauts are exposed to successive phases of hypergravity during the takeoff and landing of spaceflights and to microgravity in space. The most important and earliest reported symptom, related to days 1–3 of the spaceflight, is space motion sickness due to vestibular dysfunction [1,2,3,4,5,6,7]. The fluid shift, the redistribution of human body fluids due to microgravity exposure, has been proposed to be responsible for space motion sickness.
Several ground devices and protocols were rapidly developed to reproduce this phenomenon, such as centrifugation and parabolic flights [8,9]. Moreover, the decreases in plasma volume and cardiac performance, and the increase in intracranial blood pressure, participate in vascular deterioration, as recently reviewed [10,11,12]. Furthermore, alterations in gravity induce cardiovascular adaptations via modifications in endothelial and smooth muscle vascular cell functions [13,14,15,16,17]. Notably, the effects of centrifugation are only partially described in humans [18,19,20,21,22]. As during microgravity exposure, hypergravity exposure, from 1.5 to 5 g, affects vestibular functions [23,24,25,26] and modifies gene expression in the brain [27,28,29] and cognitive performance [30,31,32]. The use of hypergravity by centrifugation is required to characterize the biological effects of space motion sickness. Centrifugation close to 2× g is also proposed as a countermeasure against the deleterious effects of microgravity seen in humans [33,34]. Therefore, before exposing humans to centrifugation, it is important to study its biological impacts. The cerebral blood vessels are crucial in brain functions regarding oxygen supply and exchanges of nutrients and wastes. The endothelial cells of brain capillaries are organized to form the blood–brain barrier (BBB), ensuring the fine-tuning of these exchanges to maintain brain homeostasis [35,36]. The efficacy of the BBB is regulated by nychthemeral rhythms [37,38,39,40]. BBB alterations are clearly implicated in stroke and neurodegenerative disorders [41,42,43,44]. Gravity changes are able to modify endothelial cell functions [45]. Many in vitro models have been developed to reproduce the BBB [46], and experiments that exposed endothelial cells to gravity modifications revealed mixed results, as reviewed [47].
Depending on hypergravity levels from 3 g to 20 g, endothelial cells modify their gene expression, angiogenesis, cytoskeleton architecture and tube formation [48,49,50,51]. Moreover, in devices that reproduce the barrier function, the effects of short-term exposure to hypergravity remain unclear. In fact, exposure at 2 g and 4 g increases barrier efficacy, as shown by resistance measurements of the endothelial cell culture [52], whereas a higher level (10 g) decreases it, as shown by the increase in fluorescent molecules passing through the culture monolayer [53]. These in vitro observations of how hypergravity affects the capacity of endothelial cells to form a barrier are insufficient to interpret the modifications in the BBB observed in vivo. More information should be collected in vivo. In mice exposed to hypergravity at 2 g for 24 h, we measured the transit through the BBB of fluorescent molecules of different sizes, such as dextrans and antisense oligonucleotides (AS). We also investigated the regulation of expression of genes involved in junctions between endothelial cells. ## 2.1. Animals and Centrifugation In accordance with the principles of the European community, the experimental protocols were validated by the local ethics committee (CEEA-Loire, APAFIS #38819), the animal welfare committee of PLEXAN (PLateforme d’EXpérimentations et d’ANalyses, Faculty of Medicine, Université Jean Monnet, Saint Etienne, France, agreement n°42-18-0801) and the French Ministry of Research. In this study, 86 male C57BL/6JRJ mice (8 weeks old, 22.5 ± 0.1 g, Janvier Labs, France) were used. The animals were housed (3 mice per cage) in standard conditions (22 °C, humidity $55\%$; 12 h/12 h day/night cycle; unlimited access to food and water). They were familiarized with the centrifugation room the week before the experiments and monitored by video in the centrifuge.
In order to expose all the animals to the same environmental conditions, the mice were centrifuged at 2× g for 24 h, and the control mice in normogravity at 1 g for 24 h were placed simultaneously in the experimental room. The centrifugation protocol was detailed in our previous publication. ## 2.2. In Vivo Injection of Antisense Oligonucleotide and Dextrans All the fluorescent molecules were diluted in saline solution (sodium chloride 9 g/L) and retro-orbitally injected in the blood, under isoflurane anesthesia ($5\%$). In our hands, this route of administration is safer (more rapid, efficient and reproducible) than other routes of i.v. administration. Sham mice were injected with vehicle solution. Mice received only one injection with one fluorescent tracer. Phosphorothioate antisense oligonucleotide directed against angiopoietin-2 (Angpt2, named AS, GCG-TTA-GAC-ATG-TAG-GG, 6084.9 g/mol, Eurogentec) was coupled to 5-carboxyfluorescein (excitation: 492 nm, blue light; emission: 518 nm, green light) and injected (18 mg/kg). Fluorescein isothiocyanate-dextrans D40, D70 and D150 (FD40-100MG, FD70S-100MG, FD150S-1G, respectively, Sigma-Aldrich, St. Louis, MO, USA) were solubilized in vehicle (2 g/100 mL, to be injected retro-orbitally at 150 mg/kg, near 200 µL/mouse). Fluorescein isothiocyanate-dextrans were maximally excited at 490 nm (blue) and maximally emitted at 525 nm (green). ## 2.3. Collection of Biological Samples Mice were killed in random order by lethal intraperitoneal injection of sodium pentobarbital (Euthasol, 175 mg/kg, i.p.), within 2 h after stopping the centrifuge. Before intracardiac perfusion, a catheter was introduced in the right atrium, and blood samples were collected and placed in microtubes. Finally, mice were perfused intracardially (5 mL/min) with 30 mL of phosphate-buffered saline (0.01 M PBS, pH 7.4) to discard blood cells and residual fluorescence of the injected tracers in the vessel lumen.
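The injected volume quoted in Section 2.2 ("near 200 µL/mouse") follows from the dose arithmetic: dose per animal divided by stock concentration. A sketch of the calculation, assuming the mean body weight of 22.5 g given in Section 2.1 (the function name is illustrative):

```python
def injection_volume_ul(body_weight_g, dose_mg_per_kg, stock_g_per_100ml):
    """Volume (µL) of stock solution needed to deliver the target dose."""
    dose_mg = body_weight_g / 1000 * dose_mg_per_kg        # mg of tracer for this animal
    stock_mg_per_ul = stock_g_per_100ml * 1000 / 100_000   # g/100 mL -> mg/µL
    return dose_mg / stock_mg_per_ul

# 22.5 g mouse, 150 mg/kg dextran, 2 g/100 mL stock:
print(round(injection_volume_ul(22.5, 150, 2)))  # → 169 µL, consistent with "near 200 µL/mouse"
```
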
This step was followed by 30 mL formalin solution ($10\%$, Merck, HT501128) to fix the tissues. Brains and the left lobe of livers were dissected and post-fixed for 24 h in a formalin solution at room temperature, placed for 48 h in a $30\%$ sucrose–PBS solution at 4 °C and cryopreserved before being sliced. ## 2.4. Corticosterone Assay The microtubes containing blood samples were centrifuged (10 min at 2000× g) and 20 µL of serum was collected. Some serum samples were excluded due to hemolysis. The others (n = 60) were used for corticosterone assay (ELISA kit, K014, Arbor Assays, Ann Arbor, MI, USA), following the protocol of the supplier. ## 2.5. Histology Using a freezing microtome (frigomobil, Reichert-Jung), coronal sections of the brain (40 μm thick) were made. Olfactory bulbs were removed, and 3.2 mm after beginning the rostro-caudal slicing, the new slices were collected and placed individually in 48-well plates. To ensure reproducibility, we anatomically selected three similar brain slices for each mouse. Using a binocular device, the slices corresponding to interaural 1.98 mm; Bregma −1.82 mm of the Atlas of the mouse brain in stereotaxic coordinates [54] were retained. Indeed, the medial habenular nuclei and mammillothalamic tract were anatomical landmarks, as well as the form and volume of the hippocampus. In the same manner, the left lobes of the liver were sliced (40 µm), and three slices per mouse were mounted. All the floating sections were incubated for 10 min in DAPI (4′,6-diamidino-2-phenylindole, 1:250,000, Interchim, Mannheim, Germany) and rinsed twice in PBS (10 and 20 min, respectively). Finally, they were mounted on glass slides (Superfrost) with a handmade medium based on Mowiol. All slices were DAPI-labeled and mounted on the same day. Slices presenting red blood cells in capillaries in ROI were excluded to reduce experimental bias [55]. ## 2.6.
Image Acquisition The fluorescence of labeled brain slices was observed by confocal microscopy (SP5, Leica Microsystems, Wetzlar, Germany) and the slide scanner Nanozoomer (2.0 HT, Hamamatsu Photonics, Shizuoka Prefecture, Japan). The Nanozoomer contains a fluorescence imaging module using objective UPS APO 20X NA 0.75 combined with an additional lens 1.75X. Virtual slides were acquired with a TDI-3 CCD camera. The fluorescent acquisitions were conducted with a mercury lamp (LX2000 200W, Hamamatsu Photonics, Massy, France), and the set of filters adapted for DAPI and FITC/FAM fluorescence was usable for both fluorescein isothiocyanate-dextrans and the 5-carboxyfluorescein antisense oligonucleotide. The DAPI labeling, revealing the double strand of DNA in the cell nuclei, was used for the automated focus required for Nanozoomer imaging. To reduce bias, all images (slices from control and centrifuged mice) were acquired in random order in one batch. To localize antisense oligonucleotides in the brain and liver tissues, some images were acquired with the SP5 confocal microscope. In this case, fluorescent molecules were excited with the 488 nm line of the Argon laser and all acquisition parameters were kept constant. ## 2.7. Fluorescence Analyses Several types of fluorescence analyses were performed double-blindly on Nanozoomer images. To evaluate the intensity level of fluorescence, the ndpi files generated by the Nanozoomer were converted into tiff images with NDPI software (version 2). The tiff files were opened with Fiji software 2.9.0, and the intensity levels were measured in regions of interest (ROI, defined as red circles of 960 µm² in the figures, placed on the hippocampus (HPC), dorsal thalamic nuclei (THAL) and the retrosplenial and ectorhinal cortices (DCx and LCx, respectively)) on both hemispheres of the three slices. No filter settings were applied to the images, and we checked that the images did not have any saturated dots.
The mean fluorescence was calculated for each mouse and reported in the statistical analysis. A similar analysis was performed in three liver slices. Five ROI were randomly placed on each slice. Moreover, the image analysis of fluorescent spots was performed with QuPath directly on the ndpi files. The software is able to identify and localize fluorescent spots. We empirically determined parameters to segregate fluorescent spots in brain slices from 5 mice (control and centrifuged mice), and we applied these parameters to the project containing the entire sample. The parameters were: pixel size 0.5 µm, background radius 30 µm, median filter radius 0, sigma 1, minimum area 5 µm², maximum area 1000 µm² and threshold 7. The collected data were attributed to experimental groups (2 g vs. 1 g) and compared statistically. The reported analyses were performed on the ROI anatomically defined as HPC (hippocampus), THAL (grouping all medio-dorsal and lateral thalamic nuclei), DCx (containing retrosplenial cortices), SoCx (containing somatosensorial cortices) and PirCx (containing piriform cortices). A similar analysis was performed on the left lobe liver slices. ## 2.8. Gene Expression by RT-qPCR For this experiment, 16 mice were used (8 were exposed to 2 g and 8 to 1 g, as described before). They were anesthetized with isoflurane $5\%$ and decapitated, and the brains were directly frozen and stored at −80 °C. Hippocampi were dissected on ice and placed in 2 mL tubes containing 500 µL of Tri-reagent (MRCgene) and 10 ceramic beads (diameter 1.5 mm). Samples were mashed in a Beadbug6 shaker (Benchmark; 3 cycles, speed setting 4350, 60 s each). RNA was isolated following the MRCgene protocol. The concentration of RNA was measured with a Nanodrop (Thermoscientific, Waltham, MA, USA) and adjusted close to 100 ng/µL.
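The QuPath spot detection described in Section 2.7 amounts to thresholding the image, labeling connected bright regions, and keeping only those whose area falls within the stated 5–1000 µm² window. A minimal Python sketch of that logic, run on a synthetic image (this illustrates the principle only, not the QuPath implementation; parameter values mirror the text):

```python
import numpy as np
from scipy import ndimage

def count_spots(img, threshold=7, pixel_size_um=0.5, min_area_um2=5, max_area_um2=1000):
    """Count connected bright regions whose area lies within the given µm² window."""
    mask = img > threshold
    labels, n = ndimage.label(mask)                           # connected components
    sizes = ndimage.sum(mask, labels, np.arange(1, n + 1))    # pixels per labelled region
    areas = sizes * pixel_size_um ** 2                        # pixel counts -> µm²
    return int(np.sum((areas >= min_area_um2) & (areas <= max_area_um2)))

# Synthetic 100x100 image: two 10x10-pixel bright squares (25 µm² each at 0.5 µm/pixel)
# plus one isolated bright pixel (0.25 µm², below the minimum area, so ignored).
img = np.zeros((100, 100))
img[10:20, 10:20] = 20
img[50:60, 50:60] = 20
img[80, 80] = 20
print(count_spots(img))  # → 2
```
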
The cDNA was produced with the iScript gDNA Clear cDNA Synthesis Kit (Bio-Rad, reference 1725035), using 100 ng of RNA and following the protocol from the supplier. The qPCR was performed using the “endothelial cell contacts by junction” M96 panel (predesigned for use with SYBR green; Bio-Rad plate reference 10029202) and the SsoAdvanced Universal SYBR Green PCR kit (Bio-Rad, reference 1725275). The qPCR was performed with a CFX96 thermocycler (Bio-Rad). Samples were allocated randomly in plates, and some of them were tested twice to verify the quality of the experiment. The validation of Hprt and Gapdh as reference genes was evaluated with CFX Maestro software (Bio-Rad). The analysis of gene expression was performed on Actb, Actg1, Actn1, Actn2, Actn4, Cdh2, Cdh5, Cldn1, Cldn3, Cldn5, Ctnna1, Ctnnb1, Ctnnd1, Dsp, F11r, Gja1, Gja4, Gja5, Jam2, Jup, Ocln, Tjp1, Tjp2 and Vim. The threshold for regulation of gene expression by hypergravity was set at 1.5-fold. To discuss the RT-qPCR results, we checked the brain localization, the cell types expressing the genes and the function of the proteins encoded by these genes in endothelial cells using databases: https://www.proteinatlas.org; http://mousebrain.org; http://betsholtzlab.org and https://www.informatics.jax.org (accessed on 26 January 2023). ## 2.9. Statistical Analysis The data were statistically compared using paired t-tests, the non-parametric Mann–Whitney test, or one- and two-way ANOVA with post hoc comparisons when applicable. Normogravity (1 g) is the control condition. The software used was GraphPad Prism V9, which computed the p value as the probability of observing the data under the null hypothesis that the two conditions do not differ. If p ≤ 0.05, the two compared conditions were considered statistically different. ## 3.1.
Effects of Centrifugation on Mice The body weight gain (Figure 1) is the difference in body weight measured before and after exposure to centrifugation (2× g) or control conditions (1 g) over a 24 h period. As expected, the exposure to centrifugation induced a decrease in body weight (Figure 1A, p < 0.0001). More precisely, the decrease in body weight was similar in mice injected with saline solution (Sham) and with the solution containing fluorescent antisense oligonucleotide directed against angiopoietin-2 (AS) (two-way ANOVA coupled with Sidak post hoc test, interaction p = 0.0009; 1 g vs. 2 g: p < 0.0001; sham vs. AS: p = 0.589; Figure 1B). The decrease in body weight gain due to centrifugation was similarly observed in mice injected with dextrans (D40, D70 and D150; one-way ANOVA coupled with Sidak post hoc test, 1 g vs. 2 g: p < 0.0001; p < 0.05 for comparison of 1 g groups as well as of 2 g groups, Figure 1C). The effects observed in mice injected with AS or dextrans (D40, D70 and D150) were similar (statistical analysis shown in Figure 1C). In conclusion, the injection of fluorescent tracers did not influence the effect of centrifugation on body weight gain. To explain the decrease in body weight, we also measured food and water consumption. As shown in Figure 1D,E, the comparison of food and water consumption, respectively, during the day before the centrifugation with the consumption during the 24 h of centrifugation exposure showed that both food and water consumption specifically decreased in the group exposed to the centrifugation (two-way ANOVA, time × gravity p = 0.0001 for both parameters). Stress was evaluated by the concentration of corticosterone in plasma. The comparison between 1 g and 2 g conditions, including all the samples, did not reveal a variation in corticosterone concentration (Figure 2A, Mann–Whitney test, p = 0.255).
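Two-group comparisons like the one above (1 g vs. 2 g, Mann–Whitney test) can be reproduced with standard tools. A minimal sketch using scipy with made-up corticosterone-like values; the study's raw data are not reproduced here:

```python
from scipy.stats import mannwhitneyu

# Hypothetical corticosterone concentrations (ng/mL), for illustration only.
group_1g = [45.0, 52.3, 38.1, 60.2, 47.5, 55.9, 41.8, 49.0]
group_2g = [50.1, 44.7, 63.0, 39.5, 58.2, 46.3, 61.7, 42.9]

# Two-sided rank-based test: no assumption of normality, as in the paper.
stat, p = mannwhitneyu(group_1g, group_2g, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
```
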
We also separately analyzed the corticosterone concentration in each experimental group. In mice injected with saline solution (Sham), AS, D40, D70 and D150, the centrifugation had no significant effect on the corticosterone concentration (Figure 2B, one-way ANOVA, p = 0.278). In conclusion, centrifugation at 2× g did not modify the plasma concentration of corticosterone in mice injected with fluorescent tracers. ## 3.2. Effects of Centrifugation on Extravasation of Fluorescent Dextrans in Brain The extravasation of dextrans through the BBB was first evaluated by the analysis of fluorescence intensities in several brain areas. To minimize local variations, we performed all analyses on slices containing similar anatomical landmarks. The regions of interest were distributed in different cerebral areas (red ROI in thalamus, hippocampus and dorsal and lateral cortices, Figure 3). The centrifugation did not modify the fluorescence levels in THAL (Figure 3A, Mann–Whitney tests comparing 1 g vs. 2 g conditions: for D40, p = 0.93; for D70, p = 0.29; and for D150, p = 0.069), HPC (Figure 3B: for D40, p = 0.53; for D70, p = 0.76; and for D150, p = 0.089) or LCx (Figure 3C: for D40, p = 0.50; for D70, p = 0.08; and for D150, p = 0.16). In DCx, hypergravity increased the fluorescence level only for D70 (for D40, p = 0.051; for D70, p = 0.040; and for D150, p = 0.050; Figure 3D). The differences in D70 fluorescence across brain sections are illustrated in Figure 3E. More marked fluorescence diffusion is observed in the DCx of 2 g-exposed mice. In conclusion, these data suggest that centrifugation significantly increased the presence of D70 in DCx. ## 3.3.
Effects of Centrifugation on Extravasation of Fluorescent AS in Liver We tested the ability of hypergravity to promote the passage of a molecule that can be captured by liver parenchyma cells. To test this hypothesis, we injected mice with fluorescent antisense oligonucleotides and compared the 1 g and 2 g conditions. The same quantification methods used for dextrans were applied to images obtained with the Nanozoomer (Figure 4A). A significant increase in fluorescence in liver parenchyma was revealed in mice exposed to hypergravity (Figure 4B, Mann–Whitney test, 1 g vs. 2 g: p = 0.0291). Moreover, the number of areas containing fluorescence evaluated with QuPath was higher at 2 g in comparison with 1 g (Figure 4C, Mann–Whitney test, 1 g vs. 2 g: p < 0.0001). With confocal microscopy, the presence of AS was qualitatively revealed as spots of fluorescence close to vessel walls in the liver parenchyma. Taken together, these results strongly suggest that hypergravity increased AS extravasation in the liver parenchyma. ## 3.4. Effects of Centrifugation on Extravasation of Fluorescent AS in Brain The qualitative analysis of images obtained with the Nanozoomer and confocal SP5 showed fluorescent spots in the brain parenchyma only in slices from 2 g-exposed mice (Figure 5A and Figure 6A). The confocal images also revealed that these fluorescent spots were mostly localized in the brain parenchyma close to the vessel walls (Figure 5A, right panel). The quantitative analyses of images from the Nanozoomer showed an increase in fluorescence level in HPC and DCx due to hypergravity exposure (Mann–Whitney tests, 1 g vs. 2 g: in THAL p = 0.369, in HPC p = 0.033, in DCx p = 0.016 and in LCx p = 0.265; Figure 5B). The analysis with QuPath software was used to segregate fluorescent areas from the background in several brain regions (Figure 6A) using the same filtering parameters in both 1 g and 2 g conditions.
The analyses confirmed that the exposure to hypergravity increased the number of fluorescent spots in HPC and DCx, but not in THAL (Mann–Whitney tests, 1 g vs. 2 g: in THAL p = 0.0536, in HPC p = 0.0003 and in DCx p < 0.0001; Figure 6B). Moreover, they also revealed an increase in the number of fluorescent spots in SoCx and PirCx (Mann–Whitney tests, 1 g vs. 2 g: p < 0.0001 and p = 0.0024, respectively; Figure 6B). In conclusion, our data suggest that hypergravity induced a BBB leakage able to increase the presence of AS in brain parenchyma. ## 3.5. Effects of Centrifugation on Expression of Genes Involved in Endothelial Cell Interactions Using Hprt and Gapdh as reference genes, the RT-qPCR analysis of the expression of genes involved in the regulation of endothelial cell interactions revealed that Gja4, Ctnnd1 and Actn1 were upregulated. Cdh2 was downregulated more than 2-fold, and Ocln, Actn2, Jup, Actn4, Tjp2 and Gja1 were downregulated between 1.5- and 2-fold (Figure 7). The expression of Actb, Actg1, Cdh5, Cldn1, Cldn5, Ctnna1, Ctnnd1, Dsp, F11r, Gja5, Jam2, Tjp1 and Vim was considered not altered (less than 1.5-fold modification), and Cldn3 appeared not to be expressed. The names, functions and cell types expressing these genes are summarized in Table 1 and supplementary Table S1. ## 4. Discussion In the present study, our results suggest that hypergravity induces an increase in BBB permeability, allowing the passage of antisense oligonucleotides as well as dextran from blood to brain parenchyma. Moreover, the RT-qPCR experiments suggested an alteration in the expression of genes involved in endothelial cell junctions. In a ground model of hindlimb-unloaded animals without [56] or in combination with radiation [57], as well as during spaceflight [58], the BBB was altered, suggesting that vestibular regulations were involved.
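The up/downregulation classification used in Section 3.5 (normalization to the reference genes Hprt and Gapdh, 1.5-fold threshold) corresponds to the standard 2^-ΔΔCt calculation. A minimal sketch with hypothetical Ct values; the study's raw Ct data are not reproduced here:

```python
def fold_change(ct_target_2g, ct_target_1g, ct_refs_2g, ct_refs_1g):
    """2^-ΔΔCt fold change, normalized to the mean Ct of the reference genes."""
    d_ct_2g = ct_target_2g - sum(ct_refs_2g) / len(ct_refs_2g)
    d_ct_1g = ct_target_1g - sum(ct_refs_1g) / len(ct_refs_1g)
    return 2 ** -(d_ct_2g - d_ct_1g)

def classify(fc, threshold=1.5):
    """Apply the paper's 1.5-fold cut-off for up/downregulation."""
    if fc >= threshold:
        return "upregulated"
    if fc <= 1 / threshold:
        return "downregulated"
    return "not altered"

# Hypothetical Ct values (references: Hprt, Gapdh), for illustration only:
fc = fold_change(24.0, 25.2, (20.0, 21.0), (20.1, 20.9))
print(round(fc, 2), classify(fc))  # → 2.3 upregulated
```
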
As reviewed recently, the increase in gravity by centrifugation modifies vestibular function and induces motion sickness [59]. Our experiments confirm the decrease in body weight generated by centrifugation [18,60]. It is linked with the decrease in food intake [61], and probably with vestibular impairments [18,62]. Hypergravity exposure at 2 g increases the corticosterone concentration when it is measured during the first hour of exposure [63]. An increase in the hypergravity level can transiently increase the plasma corticosterone level [64]. Nevertheless, as our data showed, after 24 h of mild exposure at 2 g, the corticosterone levels were not altered in the first hour following the stop of the centrifuge [65]. Whether centrifugation induces stress is controversial, and probably depends on the design of the centrifuge and the experimental procedure with animals [18,27,63]. Moreover, our data showed a large spread of individual corticosterone concentrations, confirming other studies [18,23,65]. In motion sickness, the relationship between brain and intestinal functions is well established, including in microgravity and hypergravity models [66,67,68]. The most probable link is hypophagia. In mice and rats, the decrease in food intake was observed at the beginning of the 2 g exposure (first two days) and depended on the vestibular organ [18,69]. The hypophagia could have several causes, including: 1. modifications in the microbiota [70] that can decrease gastric acid synthesis [71]; 2. metabolic dysregulation, such as decreases in leptin and insulin plasma concentrations [60]; and 3. modifications in the expression of starvation-induced genes [72]. Moreover, the serotonin pathways are involved in this phenomenon [69,73]. In conclusion, our results also confirm that the hypophagia induced a decrease in body weight. This is related to the hypergravity itself and not to an increase in corticosterone levels [30,60,62,65].
Fluorescent polysaccharides such as dextrans are safe at low concentrations and available in sizes from 3 to 2000 kDa. They can be used to study BBB permeability [74,75,76] and to determine the size of a BBB leak [77,78,79]. After 24 h of exposure to 2 g hypergravity, our results demonstrated that 70 kDa dextran can extravasate into cortex parenchyma, but not 40 or 150 kDa dextran. The lower the molecular weight of a dextran, the faster it is excreted. In fact, in less than one hour, dextrans between 30 and 40 kDa were excreted in urine, whereas the 62 kDa dextran remained present in the blood circulation and was scarcely present in urine [80]. This suggests that after 24 h, the 40 kDa dextran would have been excreted. Thereby, the BBB leak required more than one hour of hypergravity exposure, confirming our previous data showing that short exposure (1–9 min at 5 g) was not sufficient to destabilize the BBB [65]. Because we cannot exclude an alteration in the urinary excretion of dextran in the hypergravity context, our data should be completed by an evaluation of the kinetics of dextran excretion in centrifuged mice. Because of the molecule's shape, 150 kDa dextran was unable to flow from the circulation to the tissues in physiological conditions [80]. Our results showed that the BBB leak is not sufficient for 150 kDa dextran extravasation, suggesting that this leak is not comparable to the BBB disruption induced by stroke or acute hypertension [81]. In our previous study [65], the extravasation of IgG (around 150 kDa) was measured, suggesting that the nature of the molecule is also a crucial parameter. Moreover, our data showing the extravasation of antisense oligonucleotide in the cortex and hippocampus confirm that the BBB properties depend on the brain areas and the chemical nature of the markers [82,83,84]. In conclusion, our results showed an increase in the transfer of fluorescent molecules from blood to tissues, suggesting a global modification in effluxes due to hypergravity.
To assess the alterations in the BBB in centrifuged mice, we focused the molecular investigation on gene expression, using a set of primers targeting consensual genes involved in BBB efficacy. As reviewed recently [85,86], all of the proteins encoded by the genes studied here are involved in the scaffoldings required to maintain the endothelial cell interactions that create the BBB, as well as in the initiation of angiogenesis and/or vascular repair. The database queries concerning the expression level in non-neuronal cells of the brain indicated that the proteins encoded by the studied genes are also expressed in endothelial cells, but not exclusively (Supplementary Table S2). As expected, the modifications in gene expression are related to the duration of centrifugation and the level of hypergravity, as suggested by the comparison between the current study and the RNAseq performed previously on the same device and the same mouse strains [29]. Moreover, the regulation of gene expression is not comparable to that under acute and chronic stress (Supplementary Table S3). Globally, the observed modifications could be interpreted as a specific dysregulation of gene expression that can alter the turnover and replacement of proteins involved in BBB efficacy, as observed in BBB disruption models such as stroke, middle cerebral artery occlusion or hypoxia. ## 5. Conclusions This work suggests that the modification in gravity, accompanied by altered vestibular function, leads to an alteration in the BBB via changes in the expression of genes encoding the proteins of the junctions between endothelial cells. As studied here, an alteration in the BBB, and not its destruction, allows the passage of molecules defined by their sizes and chemical natures. Our work emphasizes this point: how an alteration in the BBB is characterized depends on the means of study, i.e., the markers and measurement methods.
This can be considered in two antagonistic ways: either as a minimally invasive physical means for molecules of therapeutic interest to cross the BBB or, on the contrary, as a deleterious phenomenon that may contribute to the pathology of altered vestibular function during spaceflight. The most important limitation of this study is that the RT-qPCR was performed on RNA extracted from whole brain, and the query of hipposeq.janelia.org indicated that we cannot exclude an alteration in the molecular scaffolding of synapses, which also implicates these genes. Finally, our study can be considered an extension of studies on the effectiveness of molecules in modulating passage across the BBB. In a hypergravity context, but also in other models of altered vestibular function, the transduction pathways involved in BBB alterations should also be investigated. For example, the angiopoietin-2 pathway is crucial for endothelial cell disassembly [87], and GPCR internalization in endothelial cells [88] should be considered in the context of centrifugation. The last topic that we could investigate is the effect of gravity modulation on angiogenesis, which is required to renew the endothelium and form new brain capillaries. In fact, experiments on cultured endothelial cells have suggested that hypergravity reduces their capacity to form tubes and alters their responses to angiogenic factors [48,49,50,51]. In centrifugation as well as during parabolic flights, the in vivo responses to angiogenic factors have not yet been investigated. Moreover, it has been shown that during the takeoff and landing of a space module (BION-M 1), hypergravity induces cardiovascular changes [89]. More experiments should be conducted to specify how these cardiovascular changes can modify the structure of the BBB and neurovascular unit functions.
To restore physiological functions after spaceflight or bed rest in humans, a daily sequence of short exposures to centrifugation close to 2× g has been proposed. It is crucial to verify whether this protocol has any effect on the BBB. Recently, biomarkers of BBB alteration have been listed [90], and they should be investigated in the spaceflight context. Finally, centrifugation could be considered as a means to potentiate vectorization and could be used to investigate cell functions with antisense oligonucleotides in pathophysiological contexts.
# Colloidal Nanoparticles Isolated from Duck Soup Exhibit Antioxidant Effect on Macrophages and Enterocytes ## Abstract Food-derived colloidal nanoparticles (CNPs) have been found in many food cooking processes, and their specific effects on human health need to be further explored. Here, we report the successful isolation of CNPs from duck soup. The hydrodynamic diameter of the obtained CNPs was 255.23 ± 12.77 nm, and they comprised lipids ($51.2\%$), protein ($30.8\%$), and carbohydrates ($7.9\%$). As indicated by tests of free radical scavenging and ferric reducing capacities, the CNPs possessed remarkable antioxidant activity. Macrophages and enterocytes are essential for intestinal homeostasis. Therefore, RAW 264.7 and Caco-2 cells were used to establish an oxidative stress model to investigate the antioxidant characteristics of the CNPs. The results showed that the CNPs from duck soup could be engulfed by these two cell lines and could significantly alleviate 2,2′-Azobis(2-methylpropionamidine) dihydrochloride (AAPH)-induced oxidative damage. This indicates that the intake of duck soup is beneficial for intestinal health. These data contribute to revealing the underlying functional mechanism of Chinese traditional duck soup and to the development of food-derived functional components. ## 1. Introduction Food-grade nanoparticles, such as nanoemulsions and liposomes, have been successfully developed with excellent stability and efficacy [1,2]. Soups can produce colloidal nanoparticles (CNPs) through self-assembly, as observed in soups made with clams [3], Chinese medicine [4], and pig bones [5]. Nutrients migrate from food raw materials into the water and then self-assemble into particles, ranging in size from the nanometer to the micron scale, through covalent and noncovalent intermolecular interactions during heating.
Soups have rich flavors and complex components, and the CNPs in them change the degree of digestion and the absorption of nutrients from raw materials [6,7], but further research is needed to investigate the ingestion and functioning of these nanoparticles. Macrophages are immune cells that serve a variety of purposes. They are widely distributed throughout the body and serve as significant study subjects for cellular phagocytosis, cellular immunity, and molecular immunology. Macrophages have a strategic role in intestinal homeostasis and intestinal physiology [8]. RAW 264.7 is considered to be one of the best models of macrophages, and it has several uses in the study of inflammation, immunity, apoptosis, and tumors [9,10]. The colonic mucosal epithelium is the fulcrum that maintains intestinal homeostasis, and these barrier-forming cells can precisely control redox signaling and thus avoid tissue damage [11]. The interactions between food and intestine have been studied using the Caco-2 cell model, which is well adapted for this purpose [12]. Oxidative stress is caused by the imbalance between oxidation and antioxidation in biological systems. It usually leads to the excess accumulation of reactive oxygen species (ROS) and induces damage to cellular components and cell apoptosis [13,14]. A large number of studies on oxidative stress and anti-inflammatory processes have been carried out in Caco-2 cells or RAW 264.7 cells, confirming the representativeness of such cell models [15,16,17]. Therefore, studying the interactions of food-derived CNPs with both macrophages and enterocytes should be appropriate for revealing the effects of CNPs on intestinal health. Duck meat, as a quality and nutritious meat resource, is becoming more and more popular with consumers worldwide, especially in Asia [18]. In China, old duck is often used as the raw material for duck soup, and it is believed to have a curative effect on inflammation.
Aged ducks (500 days of age) have been found to have a significant antioxidant capacity, with abundant metabolites [19]. However, the CNPs in duck soup and their biological effects on the intestine have not yet been characterized. To further understand the bioactivity of duck soup, CNPs were isolated from duck soup, and their interactions with Caco-2 cells and RAW 264.7 cells were investigated to reveal the antioxidant effects of these CNPs on the intestinal tract. This work should therefore expand our knowledge of the biological function mechanisms of food soups and contribute to the development of gastrointestinal protection. ## 2.1. Materials All the analytical-grade reagents used were purchased from Sinopharm Chemical Reagent Co., Ltd. (Shanghai, China), including sodium dihydrogen phosphate, glucose, sodium chloride, sodium hydroxide, and disodium hydrogen phosphate. Triglyceride kits, BCA protein assay kits, total antioxidant capacity colorimetric (T-AOC) assay kits (FRAP method), 2,2′-azinobis-(3-ethylbenzthiazoline-6-sulphonate) (ABTS), cell counting kit-8 (CCK-8), and 1,1-diphenyl-2-picrylhydrazyl (DPPH) were provided by Sangon Biotech Co., Ltd. (Shanghai, China). The following products were purchased from Sigma-Aldrich Co., Ltd. (Shanghai, China): Hoechst 33342 staining solution, dimethyl sulfoxide (DMSO), DiBAC4(3) staining solution, 2,2′-azobis(2-methylpropionamidine) dihydrochloride (AAPH), and penicillin–streptomycin (100×, sterile). The following products were obtained from Invitrogen Co., Ltd. (Carlsbad, CA, USA): fetal bovine serum (FBS), 25% pancreatin + ethylenediaminetetraacetic acid (EDTA), Dulbecco's modified minimal essential medium (DMEM), MitoSOX Red staining solution, phosphate buffered saline (PBS), Hank's balanced salt solution (HBSS), and minimal essential medium (MEM). The RAW 264.7 and Caco-2 cell lines were procured from the BeNa Culture Collection (Suzhou, China).
The HiPrep 16/60 Sephacryl S-500HR (1.0 × 120 cm) column was purchased from General Electric Company (Fairfield, CT, USA). Cell culture flasks, 96-well plates (black, transparent flat bottom), and 24-well plates were purchased from Corning Company (Corning, NY, USA). ## 2.2. The Preparation of Duck Soup Fresh 600-day-old Sheldrake carcasses (Huaying, Xinyang, China) were purchased. The duck breast meat was cut into square pieces with a side length of about 2 cm, blanched in boiling water to remove blood, and then rinsed in clean water. Meat pieces were cooked in deionized water (meat/water, w/v, 1:3) for 3 h at 100 °C on an induction cooker (300 W), during which the water lost to cooking was replenished according to the liquid level. The duck soup was filtered twice through eight layers of cotton gauze to remove solids, and then stored at −40 °C until use [6]. ## 2.3. Separation of the CNPs from the Duck Soup The CNPs of duck soup were separated according to a reported method, with some modifications [6]. The duck soup was centrifuged at 400× g for 10 min and then filtered through a 0.45 μm filter membrane. Four milliliters of duck soup supernatant were separated on a pre-equilibrated chromatographic column mounted on an AKTA avant 150 system (General Electric Company, Fairfield, CT, USA). The concentration of the phosphate buffer was adjusted to 0.02 M. The flow rate was 1 mL/min. The automatic collector was set at 4 mL/tube, and the UV monitor was set at 280 nm. The eluent at each stage isolated from the column was labeled Fn (n denoting the order of elution), according to the peak time. Each Fn (1 mL) was gently injected into the sample cell of a dynamic light scattering (DLS) instrument (Malvern, UK) at 25 °C for measurement. The viscosity and refractive index (RI) were 0.8872 and 1.330, respectively [5].
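DLS instruments convert the measured translational diffusion coefficient into a hydrodynamic diameter via the Stokes–Einstein relation, using the temperature and dispersant viscosity quoted above. A minimal sketch (the diffusion coefficient below is hypothetical, chosen only to illustrate the size range reported for these CNPs):

```python
import math

# Boltzmann constant (J/K); temperature and viscosity from the DLS settings above
K_B = 1.380649e-23
T = 298.15        # 25 degrees Celsius in kelvin
ETA = 0.8872e-3   # dispersant viscosity in Pa*s (0.8872 cP)

def hydrodynamic_diameter(diffusion_coeff: float) -> float:
    """Stokes-Einstein relation used by DLS instruments:
    d_H = k_B * T / (3 * pi * eta * D), with D in m^2/s; result in meters."""
    return K_B * T / (3 * math.pi * ETA * diffusion_coeff)

# A diffusion coefficient of ~1.93e-12 m^2/s (hypothetical) corresponds to a
# particle of roughly 255 nm, the hydrodynamic size reported for fraction F1
d_nm = hydrodynamic_diameter(1.93e-12) * 1e9
```

This is how a reported "hydrodynamic diameter" relates to the raw DLS measurement; the instrument performs this conversion internally from the viscosity and RI supplied by the operator.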
The CNPs were selected from the eluents based on the polymer dispersity index (PDI), the hydrodynamic diameter, light scattering, and the ζ-potential. Each determination was repeated three times. ## 2.4. The Morphologies of the CNPs The CNPs were dropped onto a copper grid covered with Formvar film; the excess solution was slowly wiped off with filter paper along the edge of the copper grid, and the sample was stained with 1% uranyl acetate. Morphologies were observed under a transmission electron microscope (TEM) at 80 kV. ## 2.5. Major Composition Analysis of the CNPs The major components (lipids, proteins, and carbohydrates) of the obtained CNPs were determined [20]. Protein was quantified according to the method of the BCA protein assay kit. An anthrone-sulfuric acid test was used to assess the content of polysaccharides in the CNPs. According to the method provided by the kit, the triglyceride content of the sample was measured using the GPO-PAP enzyme assay. The absorbance measurements were conducted with a multifunctional microplate reader (Infinite 200 PRO, TECAN, Switzerland). All of the above indices were tested three times. ## 2.6. Determination of Antioxidant Activities The ABTS free radical scavenging capacity of the CNPs was evaluated based on a published method, with slight modifications [21]. ABTS powder was weighed and dissolved to 7 mM, and potassium persulfate was weighed and dissolved to 140 mM. Then, 5 mL of ABTS solution was mixed with 88 μL of potassium persulfate solution and kept away from light for 12–16 h. The mixture was then diluted 50-fold with distilled water to give the ABTS+ stock solution. The ABTS+ stock solution (200 μL) and the sample solution (50 μL) were pipetted into a microplate. The absorbance was measured at 734 nm after standing for 10 min in the dark at room temperature, and the measurement was repeated three times.
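The assay above reports absorbance only; the scavenging activity itself is conventionally computed as the relative drop in absorbance against the radical-only control. A hedged sketch of that arithmetic (all absorbance values below are hypothetical, not data from this study):

```python
def scavenging_percent(a_control: float, a_sample: float) -> float:
    """Free radical scavenging activity (%): the drop in absorbance of the
    radical solution (734 nm for ABTS, 517 nm for DPPH) relative to the
    radical-only control."""
    return (a_control - a_sample) / a_control * 100.0

def mean(values):
    return sum(values) / len(values)

# Hypothetical triplicate readings (the protocol repeats each measurement 3x)
control = [0.802, 0.798, 0.800]  # radical working solution + solvent blank
sample = [0.201, 0.199, 0.200]   # radical working solution + CNP solution

activity = scavenging_percent(mean(control), mean(sample))
print(round(activity, 1))  # 75.0
```

The same formula serves both the ABTS and DPPH assays; only the wavelength and reagent preparation differ.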
The reported method was used to test the ferric reducing antioxidant power (FRAP) of the CNPs [20]. The 100 mM FeSO4·7H2O solution was diluted with deionized water to 0.15, 0.30, 0.60, 0.90, 1.20, and 1.50 mM as the standards for the calibration curve. Samples (5 μL) and FeSO4·7H2O standards (5 μL) were added to 96-well plates in equal quantities, then FRAP reagent (5 μL) was added to each well and incubated at 37 °C for 5 min; distilled water (5 μL) was used as the control. Finally, absorbance was measured at a wavelength of 593 nm with the test temperature set at 37 °C, and the measurement was repeated three times. The DPPH radical scavenging ability of the CNPs was tested using a slightly modified published method (the ratio of DPPH solution to sample solution was 1:1, 100 µL each) [22]. The 0.1 mM DPPH solution (100 μL) and the sample solution (100 μL) were pipetted into 96-well plates and mixed. After being kept in the dark for 30 min, the plate was placed on the microplate reader, the absorbance was recorded at 517 nm, and the measurement was repeated three times. ## 2.7. Toxicity Test of the CNPs on Cells Using the CCK-8 kit, the toxicity of the CNPs to RAW 264.7 and Caco-2 cells was investigated. The cells (100 µL, 5 × 10^4 cells/well) were seeded into 96-well plates and incubated overnight (37 °C, 5% CO2). Each well was incubated for 24 h with 100 μL of CNPs at different concentrations, and then mixed with 10 μL of CCK-8 solution for 4 h. The absorbance values were measured at 450 nm, and the cell viability was calculated by referring to Gao et al. [23]. Each determination was repeated three times. ## 2.8. Observation of the Uptake of CNPs by RAW 264.7 Cells and Caco-2 Cells Nile red (1 µg/mL) was mixed with CNPs (1 mg/mL) and incubated for one hour at 40 °C. The filtrate was removed via centrifugation at 4000× g for 5 min.
The retained particles were washed with HBSS and centrifuged repeatedly until no red fluorescence was observed in the filtrate. The remaining particles were re-suspended in HBSS for use. The cell suspension was seeded at 5 × 10^4 cells/well in 24-well plates and incubated overnight (37 °C, 5% CO2). Then, the medium was removed and the cells were washed twice with HBSS. The cells were fixed with 4% paraformaldehyde and stained with Hoechst 33342 (1–10 µg/mL). The Hoechst 33342-stained cells and Nile red-tagged colloidal particles were mixed and incubated for 3 h. Fluorescence was observed with an inverted fluorescence microscope (IX-53, Olympus, Japan). The excitation and emission wavelengths of Nile red were 549 nm and 628 nm, respectively. The excitation and emission wavelengths of Hoechst 33342 were 346 and 460 nm, respectively. The instrument's software allowed observation under a unified background, and the whole experiment was carried out three times in a dark environment [20]. ## 2.9. Detection of Cell Membrane Potential and Mitochondrial Superoxide DiBAC4(3) staining solution (5 µM) and MitoSOX Red staining solution (2.5 µM) were applied for the determination of cell membrane potential and mitochondrial superoxide, respectively, using HBSS as the solvent. The procedures were as follows: 200 μL of cells at 5 × 10^4 cells/well were seeded in a black 96-well plate and cultured overnight in an incubator (37 °C, 5% CO2). The staining solution was added to each well at a dosage of 100 µL, and the excess staining solution was removed after a period of incubation (30 min for DiBAC4(3) and 10 min for MitoSOX Red). Then, 100 µL of the CNPs at various concentrations (100 µg/mL, 500 µg/mL, and 1000 µg/mL), or HBSS (control), was added, followed by 50 µL of AAPH (6.4 µM), and the plate was incubated for 30 min.
Finally, 510 nm and 580 nm were chosen as the excitation and emission wavelengths, respectively, and the fluorescence intensity was observed under an inverted fluorescence microscope. Each determination was repeated three times. ## 2.10. Statistical Analysis The data are presented as mean ± standard deviation. Statistical differences were examined via one-way analysis of variance (ANOVA) combined with Duncan's multiple comparison test. The significance level was set at p ≤ 0.05. Graphs were produced with Origin 2019 (OriginLab, Northampton, MA, USA). ## 3.1. Isolation and Properties of the CNPs Three eluents, F1, F2, and F3, were separated and collected in the range of 100 to 160 min, among which F1, in the range of 100 to 120 min, had a stronger light scattering signal than the other eluents (Figure 1A). As shown in Table 1, the average hydrodynamic diameters of F1, F2, and F3 were 255 nm, 220 nm, and 147 nm, respectively, and the light scattering intensity of F1 was roughly three times greater than that of F2. It has been indicated that CNPs with larger particle sizes are more efficiently phagocytosed by macrophages [24]. The minimum PDI of F1 indicated a narrow size distribution, while the particles in F2 and F3 may not have had a uniform size. The maximum negative ζ-potential of F3 indicated a greater ionic interaction with the chromatographic gel, resulting in delayed separation. The TEM micrograph of F1 confirmed that the particles it contained had a uniform spheroid shape (Figure 1B). Thus, the representative F1 was chosen to investigate the nano-functional properties of the CNPs from duck soup. ## 3.2. Major Components and Antioxidant Activities of CNPs The CNPs obtained after 3 h of continuous simmering of the duck soup had a lipid content of 51.2%, followed by 30.8% proteins and 7.9% sugars (Table 2).
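The one-way ANOVA named in Section 2.10 reduces to comparing between-group and within-group mean squares. A minimal, self-contained sketch on made-up triplicate data (Duncan's post hoc test is omitted; the numbers are illustrative, not measurements from this study):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: the ratio of the between-group
    mean square to the within-group mean square."""
    k = len(groups)                          # number of treatment groups
    n = sum(len(g) for g in groups)          # total observations
    grand = sum(sum(g) for g in groups) / n  # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical triplicate measurements for three treatment groups
f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4], [6, 7, 8]])
print(f_stat)  # 21.0
```

The F statistic is then compared against the F(k−1, n−k) distribution to obtain the p-value; in practice this, plus Duncan's grouping, is what Origin computes.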
It has been found that the constituent proteins in such particles are mainly associated with antioxidant activity [25], and protein, the second most abundant component of the CNPs, may thus account for their antioxidant potential. Protein extracts from duck meat have been shown to have good antioxidant and free radical scavenging abilities [26]. The comparison revealed that the composition of the CNPs in duck soup was similar to that of porcine bone soup [20], but significantly different from that of freshwater clam soup [23], which contained 60% carbohydrates. This may be due to the compositional differences between clams and duck meat, indicating that the formation of CNPs in soup should be closely associated with the ingredients of the raw materials. It is common practice to use spectrophotometric techniques to assess food antioxidant potential, including the ABTS and DPPH determinations, both of which involve the scavenging of free radicals [27]. Another method monitors ferric ion reducing capacity, expressed as FRAP; a high FRAP value indicates stronger antioxidant activity [28]. As the CNP concentration increased, the antioxidant capacity showed an increasing trend (Figure 2), demonstrating that the CNPs had a powerful antioxidant capability, but the effect of high concentrations of CNPs on cells still needed to be explored. ## 3.3. Cytotoxicities of CNPs Some nanoparticles used as food additives are toxic to Caco-2 cells, disrupting the tight junction permeability barrier and exacerbating the intestinal barrier injury and inflammatory response caused by oxidative stress [29,30]. Similarly, it has been found that SiO2 nanoparticles have cytotoxic effects on macrophages at high concentrations [31]. However, self-assembled nanoparticles derived from porcine bone and freshwater clam had a protective effect on cells [23,32].
There was no significant difference in cell viability between the CNP treatments and the control, indicating that the CNPs had no significant toxicity to Caco-2 cells (Figure 3A) or RAW 264.7 cells (Figure 3B); at concentrations of 50–300 µg/mL, the CNPs even significantly promoted the growth and proliferation of these cells. However, as the concentration of CNPs increased further, cell viability showed a declining trend (Figure 3). ## 3.4. Interactions of Caco-2 Cells and RAW 264.7 Cells with CNPs Nile red is a lipophilic fluorescent dye that can be used to label CNPs containing abundant lipids, and its reliability has been widely verified [7]. As observed in Figure 4, after incubation, almost every cell nucleus was surrounded by CNPs, and all regions of the cell except the nuclear region emitted red fluorescence, indicating that the CNPs were not only attached to the cytoplasmic membrane but also engulfed by the cells. The CNPs obtained from duck soup were shown above to have significant antioxidant capacity, and their absorption by cells through the endocytic pathway implies the potential to improve the antioxidant capacity of the cells. ## 3.5. Determination of Membrane Potential and Mitochondrial Superoxide Content in Cells Oxidative stress is caused by an imbalance between reactive oxygen species (ROS) production and the antioxidant capacities of cells; it is a cellular state characterized by excessive production of ROS [33,34]. ROS are produced by aerobic cells during metabolism, and the overproduction of ROS can damage intestinal epithelial cells [35,36]. Therefore, the body's antioxidant system needs exogenous antioxidants to effectively avoid the occurrence of oxidative stress. According to several reports, reducing oxidative stress prevents intestinal barrier deterioration and lowers inflammatory reactions in the gut [37,38,39].
AAPH is a free radical initiator that releases peroxyl radicals upon stimulation of cells, thereby causing oxidative stress and damage to cell membranes. High concentrations of AAPH can severely damage cells, causing oxidative stress and further activating uncoupling proteins in the mitochondria, leading to a decrease in the mitochondrial respiration rate and thus reducing intracellular free radical levels [40,41]. In the absence of AAPH-induced cell damage (AAPH-), the fluorescence intensity of the CNP-treated groups of Caco-2 cells was comparable to that of the control group, with no discernible differences based on the green fluorescence of DiBAC4(3) in Figure 5A. The relative fluorescence units (RFU) of Caco-2 cells were found to be much lower than those of the control group when the concentration of CNPs was 1000 µg/mL, as shown in Figure 5B. In RAW 264.7 cells, the fluorescence intensities of the CNP-treated groups did not change substantially from that of the control group, based on the green fluorescence of DiBAC4(3) in Figure 5C. Moreover, the RFU of the groups treated with various concentrations of CNPs did not differ noticeably from the control group in Figure 5D. When the AAPH inducer was added to the cells, as shown in Figure 5 (AAPH+), the green fluorescence in Caco-2 cells and RAW 264.7 cells was quenched, and a decrease in RFU was observed. However, the fluorescence was significantly restored by the addition of CNPs, probably due to the alleviation of the cellular damage caused by AAPH radicals, which significantly restored the cell membrane potential and thus counteracted the hyperpolarized state of the cell membrane caused by extracellular peroxyl radicals. For Caco-2 cells, the presence of AAPH has been reported to cause an increase in cell permeability, which can be reduced by CNPs [32].
CNPs can also protect the macrophage cytoplasm and membrane from AAPH-induced oxidative damage [20]. Therefore, it can be inferred that CNPs in the appropriate concentration range extracted from duck soup protect rather than damage the membranes of Caco-2 and RAW 264.7 cells under oxidative stress. The mitochondrion is the main site of ROS production in cells, and also the target organelle of cellular oxidative stress damage [34]. MitoSOX Red is a specific fluorescent indicator for detecting mitochondrial ROS levels, and its fluorescence intensity is proportional to the ROS concentration. As shown in Figure 6A,C, there was no difference in the red fluorescence when Caco-2 cells and RAW 264.7 cells ingested the CNPs (AAPH-). As shown in Figure 6B,D, the fluorescence intensities of Caco-2 cells and RAW 264.7 cells were not significantly different from those of the control group, indicating that the CNPs in duck soup had almost no effect on mitochondrial reactive oxygen radicals. When the cells were subjected to AAPH radical-induced damage (AAPH+), as observed in Figure 6A,C, the fluorescence of Caco-2 cells and RAW 264.7 cells almost disappeared, and strong fluorescence could hardly be seen under the microscope, indicating that AAPH radicals could suppress oxygen respiration in the mitochondria and the production of ROS. When CNPs were added and co-incubated with the cells, the red fluorescence in the cells was significantly restored compared to the control group, counteracting some of the inhibition of mitochondrial ROS by AAPH and increasing the production of ROS, indicating that the CNPs could relieve the oxidative stress of the cells. In Figure 6B, different concentrations of CNPs restored the intracellular fluorescence intensity of Caco-2 cells compared to the control group, except for 1000 µg/mL of CNPs, which had no such effect.
As shown in Figure 6D, compared with the AAPH group, different concentrations of CNPs significantly increased the intracellular fluorescence intensity of RAW 264.7 cells and promoted ROS production in the mitochondria. It was speculated that CNP concentrations of 1000 µg/mL and above might cause toxic damage to the cells and in turn impair mitochondrial function, which is consistent with the measured effects of CNPs on the cell membrane potential. The experimental results showed that 100 µg/mL and 500 µg/mL CNPs could effectively maintain mitochondrial oxygen respiration and shield cells from the oxidative harm brought on by peroxyl radicals. Interestingly, in Chinese folk tradition and Chinese medicine, duck meat is considered to have a heat-clearing (pyretolysis) effect on the body, and duck soup is highly popular [42]. Further investigations revealed that the consumption of duck meat reduced energy metabolism in rats [43]. In this study, the CNPs extracted from duck soup benefited the growth of macrophages and intestinal epithelial cells and alleviated the oxidative stress of these cells, which has implications for explaining the potential antioxidant benefits of duck meat and soup. It is worth mentioning that further tests should be carried out in mice to validate the function of the CNPs in duck soup. ## 4. Conclusions In conclusion, this study successfully extracted bioactive colloidal nanoparticles from duck soup and verified their antioxidant activity. In a suitable concentration range, the CNPs were able to interact directly with RAW 264.7 cells and Caco-2 cells and alleviate their cellular damage under oxidative stress. This study will contribute to the extraction and application of food-derived CNPs for better efficacy, and promote new applications of nanotechnology in the food field.
# Healthcare Resource Utilization in Patients with Newly Diagnosed Atrial Fibrillation: A Global Analysis from the GARFIELD-AF Registry ## Abstract The management of atrial fibrillation (AF), the most common sustained arrhythmia, impacts healthcare resource utilization (HCRU). This study aims to estimate global resource use in AF patients, using the GARFIELD-AF registry. A prospective cohort study was conducted to characterize HCRU in AF patients enrolled in sequential cohorts from 2012 to 2016 in 35 countries. The components of HCRU studied were hospital admissions, outpatient care visits, and diagnostic and interventional procedures occurring during follow-up. AF-related HCRU was reported as the percentage of patients demonstrating at least one event and was quantified as a rate per patient per year (PPPY) over time. A total of 49,574 patients were analyzed, with an overall median follow-up of 719 days. Almost all patients (99.5%) had at least one outpatient care visit, while hospital admissions were the second most frequent medical contact, with similar proportions in North America (37.5%) and Europe (37.2%), and slightly higher proportions in the other GARFIELD-AF countries (42.0%; namely Australia, Egypt, and South Africa). Asia and Latin America showed lower percentages of hospitalizations, outpatient care visits, and diagnostic and interventional procedures. These analyses of GARFIELD-AF highlight the vast AF-related HCRU and underscore significant geographical differences in the type, quantity, and frequency of AF-related HCRU. These differences were likely attributable to health service availability and differing models of care. ## 1. Introduction Atrial fibrillation (AF) is the most common arrhythmia and, with its progressively increasing prevalence, impacts public health and healthcare resource utilization (HCRU) [1,2]. AF affects approximately 37.5 million adults worldwide, with about 400 new cases per 1 million inhabitants diagnosed annually [3].
AF patients are at increased risk for stroke and suffer increased morbidity and mortality [4]. AF's association with hypercholesterolemia, diabetes mellitus, arterial hypertension, chronic kidney disease (CKD), dementia, obesity, and sleep apnea may confer a negative prognosis [4,5]. AF's association with healthcare resource utilization (HCRU) presents a large economic burden [6]. AF is estimated to account for more than 1% of total healthcare expenditures in high-income countries, mostly attributable to hospitalization [7]. Other resource use and cost contributors include medical visits, emergency room (ER) admissions, and the diagnostic and interventional procedures often required by AF patients (e.g., electrocardiography, laboratory tests, cardioversion, catheter ablation, etc.) [5,6,7,8]. Several studies have evaluated multiple aspects of AF, including its HCRU burden, studied according to specific contexts, settings, or treatment options [9,10,11,12,13]. The objective of this study was to characterize global HCRU in AF patients within the Global Anticoagulant Registry in the FIELD-AF (GARFIELD-AF). The GARFIELD-AF registry is a non-interventional, observational study that characterized a global population of non-valvular AF patients. This multicenter global registry documented patients' and sub-populations' baseline characteristics, treatment strategies, and outcome measures by including five prospective cohorts of adult subjects who were newly diagnosed with non-valvular AF (diagnosed within the six weeks before enrolment) and had at least one additional risk factor for stroke. GARFIELD-AF also included a validation cohort of retrospective patients diagnosed with non-valvular AF between 6 and 24 months prior to enrolment [14,15]. ## 2.1. Study Design and Data Source A prospective cohort design was used to characterize resource utilization associated with the care of AF patients.
The study investigated the GARFIELD–AF registry, an observational worldwide registry that prospectively and consecutively enrolled sequential cohorts of 52,167 newly diagnosed AF patients at risk of stroke from December 2009 to August 2016 in 35 countries. Eligible patients were aged 18 years or older and were enrolled consecutively into five cohorts (representing seven years of enrollment, from 2010 to 2016) of ~10,000 participants each; the additional retrospective cohort (GARFIELD–AF Cohort 1) was excluded. Participants with a follow-up period of less than three months were excluded from the analysis. Data were extracted in 2020 from the final study database lock (June 2019). The GARFIELD–AF study design has been reported elsewhere [14,15]. Baseline patient characteristics (including demographic information, clinical conditions, risk stratification, and antithrombotic treatment) were collected at inclusion in the registry [14,15]. Risk stratification was documented with the CHA2DS2-VASc score (congestive heart failure, hypertension, age ≥ 75 years [doubled], diabetes, stroke [doubled], vascular disease, age 65–74 years, and sex category [female]). Follow-up data on treatments and outcomes were collected at four-monthly intervals for up to 24 months. GARFIELD–AF data were captured using an electronic case report form (eCRF) designed by Dendrite Clinical Systems Ltd. (Henley-on-Thames, UK). Oversight of operations and data management was performed by the coordinating center (Thrombosis Research Institute, London, UK). The study is registered at ClinicalTrials.gov (unique identifier: NCT01090362). Patients were selected from multiple healthcare settings and were registered by the identifying clinician using the eCRF.
Data were collected from five clinical sources associated with the patient (i.e., hospital, emergency department, anticoagulation clinic, stroke unit, and office-based settings such as general or family practitioners, cardiologists, and internists) through a review of patient notes and clinical records [14]. Data on HCRU and changes in medication treatment were stored in a dedicated follow-up and events dataset. ## 2.2. Outcome Measures and Definitions HCRU in AF patients was evaluated with a focus on medical contacts and is reported as the proportion and frequency of patients with at least one event (the recruitment visit was excluded from the analysis). The events included in the analysis of HCRU were those linked to AF and its sequelae, collected during follow-up visits as per the study protocol and according to standardized outcome definitions [14]. The HCRU items studied included hospital admissions, outpatient hospital attendance, ER admissions, family doctor visits, stroke unit admissions, office-based specialist visits, and diagnostic and interventional procedures occurring during the follow-up period. General practitioner visits, office-based specialist visits, and hospital-based outpatient visits were grouped as "outpatient care visits" to adequately compare information from different countries and settings. Diagnostic and interventional procedures covered all those derived from follow-up events, including those specific to AF (such as electrical cardioversion and ablation), methods for pulmonary embolism diagnosis (e.g., computed tomography scan, magnetic resonance imaging scan, and invasive angiography), and interventions required for cardiovascular diseases (including percutaneous coronary intervention [PCI] bare metal stent, PCI drug eluting stent, PCI balloon angioplasty, coronary artery bypass graft, valve replacement, pacemaker, and carotid stent).
Data on medication use were not included in this study, as this has been evaluated in previous analyses of the GARFIELD-AF registry [16,17,18]. For the purpose of this analysis, patients were divided into two groups according to the enrolment cohorts, which allowed us to account for possible differences in HCRU over the whole study period. Group A included participants recruited into GARFIELD–AF Cohorts 2 and 3 from 2010 to 2013; Group B included those in Cohorts 4 to 6 from 2013 to 2016. In particular, we split patients into two 3-year timeslots because, by Cohort 3, the non-vitamin K antagonist oral anticoagulants (NOACs) had been approved in most of the countries included in the GARFIELD-AF registry. In addition, there was an increase in the proportion of newly diagnosed AF patients receiving guideline-recommended treatment [18]. The 35 countries within the registry were grouped by geographical region, according to the classification provided by the GARFIELD–AF dataset used: Asia (China, India, Japan, Korea, Singapore, Thailand, Turkey, and United Arab Emirates), Europe (Austria, Belgium, Czech Republic, Denmark, Finland, France, Germany, Hungary, Italy, The Netherlands, Norway, Poland, Russia, Spain, Sweden, Switzerland, Ukraine, and United Kingdom), Latin America (Argentina, Brazil, Chile, and Mexico), North America (Canada and United States), and other GARFIELD–AF countries (Australia, Egypt, and South Africa, henceforth defined as "others"). ## 2.3. Statistical Analysis Continuous variables were described with the mean and median as central tendency measures, and with the standard deviation (SD) and interquartile range (IQR) as dispersion measures. Categorical variables are presented using frequencies and percentages. Student's t test was used to assess differences between continuous variables, and the chi-square (χ2) or Fisher's exact tests were used, as appropriate, to assess differences between categories.
AF-related HCRU was reported as the percentage of patients having at least one event included in the analyses and quantified as a rate per patient per year (PPPY). Region-specific HCRU rates were subsequently compared using a Poisson regression model, adjusted for possible known confounders and modifiers collected in the registry, such as sex, age at enrolment, type of AF (i.e., (i) paroxysmal: AF that lasts less than 7 days and resolves spontaneously or with intervention; (ii) persistent: an AF episode that continues for more than 7 days, irrespective of whether the episode was terminated by cardioversion or self-terminated; (iii) permanent: AF that is accepted by the patient (and physician) such that a rate-control strategy is needed; or (iv) new onset (unclassified)), AF therapy (i.e., antiplatelet [AP], alone or in combination with vitamin K antagonists [VKA] or NOACs), comorbidities, prior transient ischemic attack, prior bleeding, CHA2DS2-VASc score, country income level (i.e., high, upper-middle, or lower-middle), and healthcare system payer (i.e., single payer, universal public insurance, public-private insurance, or private insurance). Results are expressed as incidence rate ratios (IRR) with 95% confidence intervals (95% CI). All p-values were two-sided, with values <0.05 considered statistically significant. Analyses were performed using STATA statistical software version 13.1 [19]. ## 3.1. Baseline Sample Characteristics This study involved a total of 49,574 patients, with an overall median follow-up period of 719 days (IQR, 597–730). The cohort consisted mainly of subjects enrolled in Europe (56.5%) and Asia (28.4%), with a median age of 71 years (IQR, 63–78). More than half of the participants were men (55.7%). Prior bleeding was reported in 2.5% of patients, transient ischemic attack in 4.4%, and diabetes in 22.4%. A complete overview of patient demographics and clinical characteristics, according to region, is given in Table 1.
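The PPPY rates and IRRs defined above can be illustrated with a crude (unadjusted) calculation. The registry analysis itself fitted an adjusted Poisson model in Stata, so the sketch below, with hypothetical counts, only shows the shape of the two quantities and a Wald 95% CI on the log scale:

```python
import math

def pppy(events: int, person_years: float) -> float:
    """Crude event rate per patient-year of follow-up."""
    return events / person_years

def crude_irr(e1: int, py1: float, e0: int, py0: float):
    """Crude incidence rate ratio of group 1 versus reference group 0, with a
    Wald 95% CI computed on the log scale. This approximates only the
    unadjusted comparison; the registry used a covariate-adjusted Poisson model."""
    irr = (e1 / py1) / (e0 / py0)
    se_log = math.sqrt(1 / e1 + 1 / e0)  # SE of log(IRR) for Poisson counts
    lower = math.exp(math.log(irr) - 1.96 * se_log)
    upper = math.exp(math.log(irr) + 1.96 * se_log)
    return irr, (lower, upper)

# Hypothetical counts: 50 hospitalizations over 100 patient-years in one region
# versus 25 hospitalizations over 100 patient-years in the reference region
irr, ci = crude_irr(50, 100.0, 25, 100.0)
print(round(pppy(50, 100.0), 2), round(irr, 2))  # 0.5 2.0
```

An IRR above 1 with a CI excluding 1 indicates a significantly higher event rate than the reference region, which is how the regional comparisons in the Results are read.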
Differences according to the cohorts of enrollment are presented in Table 2. The two cohort groups differed in almost all baseline clinical characteristics. ## 3.2. Healthcare Resource Utilization The vast majority of patients (99.5%) had at least one outpatient care visit, excluding the enrollment medical contact. Hospitalization was the second most frequent medical contact, with almost one-third (30.4%) of patients having at least one hospital visit. Higher proportions of patients with more than one hospitalization were observed in North America (37.5%), Europe (37.2%), and others (42.0%). Of these, stroke unit admissions accounted for around 1% in all groups. The highest proportions of patients with procedures and ER admissions were registered in North America (25.1% and 31.0%, respectively). The lowest proportion of patients with at least one ER admission was seen in Asia (8.1%), and the lowest proportion with procedures was recorded in Latin America (7.5%). Table 3 reports the number of GARFIELD–AF participants with at least one HCRU event. The cohort groups’ PPPY results, aggregated by region, are presented in Figure 1 and Figure 2. Outpatient care visits were the most frequent event in both groups (i.e., participants enrolled between 2010 and 2013 and those enrolled from 2013 to 2016). Large variations in the type of other medical contacts were observed across regions and between the two cohort groups. In Group A (GARFIELD–AF Cohorts 2 and 3), the highest PPPY rates for procedures and hospitalizations were seen in Europe and others, while the lowest values for both events were seen in Asia, where the lowest PPPY rate for ER visits was also registered. Patients in Europe showed lower PPPY rates for outpatient visits (Figure 1). Overall, AF-related HCRU trends in Group B (GARFIELD–AF Cohorts 4 to 6) mirrored those in Group A, with narrower regional differences compared with the earlier period (Figure 2).
Compared with Europe, patients in North America showed higher HCRU rates for all medical contacts, while those in Latin America and Asia showed lower ones. ## 4. Discussion In this real-world observational study, we combined global data from the GARFIELD–AF registry to estimate HCRU in AF patients and compared it among regions and over time. Our findings highlighted the extensive resources utilized by almost 50,000 subjects from 35 countries worldwide. Important disparities still exist in resource utilization among patients in the five regions after controlling for various confounders, such as patients’ characteristics and clinical status, as well as societal aspects (for instance, country income level and healthcare system payer). All HCRU components showed an overlapping pattern across the five regions in the two study groups, but frequencies changed across cohorts. In particular, the analyses highlighted narrower regional differences between the two groups in the second period than in the first. Overall, these changes may indicate increasingly progressive concordance with evidence-based guidelines for patients newly diagnosed with AF across countries, mirroring trends seen in previous GARFIELD–AF research. Clinical practice and treatment of AF patients appear to have become more uniform over time, likely due to wider use of NOACs and specific AF procedures, such as electrical cardioversion and ablation [20,21]. Regarding the regional distribution of AF-related HCRU, a primary reason for these differences may be the availability of services and the differing models of healthcare and AF-care organization, beyond differences in healthcare systems and payers [22]. In certain settings, gate-keeping systems, such as an initial visit to a general practitioner for access to specialist care or the presence of transitional care facilities, influenced patients’ use of services.
A previous analysis of the Latin American countries included in the GARFIELD–AF registry suggested inadequate management of AF patients, with therapy underuse attributable to physician choice, difficulties in accessing healthcare, adverse economic conditions, and lower educational levels [23]. Additionally, access to primary and cardiology care in rural communities may be a recurring challenge for older and disabled adults with AF, resulting in gaps in access to health services [24]. Another factor influencing HCRU variations across regions is differing patient demographics, particularly the population age structure. These differences may persist even after appropriate confounder correction [25]. Thus, greater numbers and frequencies of medical contacts may be at least partly attributed to the larger proportion of elderly people in some countries. Similarly, AF epidemiological metrics should be considered in the interpretation of our results. Although global rates are relatively stable, higher and more premature mortality due to AF has been shown in low- and middle-income countries [4,23,26]. In contrast, a lower risk of death in Asia and Europe compared with other regions is a common observation, likely linked to highly protective healthcare systems and easier access to services in these regions [27]. Living in North America or Latin America was instead associated with a higher risk of early death [27]. A bias toward lower reported medical contacts may exist in countries where such services are lacking or underused, resulting in a suboptimal level of care. When analyzing the type of healthcare contacts, it is worth noting that hospitalizations account for a substantial share of HCRU.
Drivers of urgent and elective hospitalization in AF patients have been extensively described in the literature and include cardiovascular and non-traditional risk factors, as well as considerable rates of readmission, particularly in comorbid, higher CHA2DS2-VASc score, and post-ablation AF patients [28,29,30]. Overall, inpatient care is the main determinant of healthcare costs associated with AF. Thus, further research is needed to develop specific, effective transitional and integrated care interventions [6,7,29]. In summary, although marked differences in resource use for AF patient care were observed worldwide, our findings from the expansive GARFIELD–AF registry suggest that AF substantially contributes to resource consumption, with an important impact on healthcare expenditure worldwide [2,29,31]. The management of AF is complex, and convergence towards guideline-directed care is crucial to maximize patients’ benefit from tailored treatment options. Indeed, implementing integrated AF care models has been shown to reduce the disease and resource burden of AF [29]. In this sense, our findings may serve as actionable indicators for novel value-based organizational approaches to support changes in the management of AF. This paper has a number of strengths and weaknesses. The design features of the GARFIELD–AF registry include the random selection of sites and the enrolment of patients without exclusion according to comorbidities or treatment, which ensure, respectively, the representativeness of the national care settings and of the population under study, thus providing reliable estimates of research outcomes. Despite these strengths, this research should be interpreted in the context of its limitations. The study did not consider other possible unmeasured confounders, which may influence HCRU in AF patients.
However, we included those mainly associated with the outcomes, and the use of robust statistical analysis allowed us to balance factors potentially correlated with such confounders. The reported burden of resource consumption was quantified excluding medication use, which was previously characterized in other GARFIELD–AF studies [16,17,18]. The differences in healthcare systems and organization across the countries included in the GARFIELD–AF registry may be reflected in the variability in types, amounts, and patterns of HCRU events. ## 5. Conclusions Within the GARFIELD–AF registry, a vast amount of HCRU was documented in AF patients from 35 countries worldwide. Important geographical differences exist in the type, quantity, and frequency of HCRU in patients with AF. Changes in AF care and variable adherence to evidence-based guidelines determined different patterns of HCRU, with a trend toward convergence of clinical practices over time.
# A Single-Center, Randomized Controlled Trial to Test the Efficacy of Nurse-Led Motivational Interviewing for Enhancing Self-Care in Adults with Heart Failure ## Abstract Background: The role of nurse-led motivational interviewing (MI) in improving self-care among patients with heart failure (HF) is promising, although further empirical evidence is required to determine its efficacy. For this reason, this study tested its efficacy in enhancing self-care maintenance (primary endpoint), self-care management, and self-care confidence three months after enrollment in adults with HF compared to usual care, and assessed changes in self-care over the follow-up times (3, 6, 9, and 12 months). Methods: A single-center, randomized, controlled, parallel-group, superiority study with two experimental arms and a control group was performed. Allocation followed a 1:1:1 ratio between the intervention groups and the control group. Results: MI was effective in improving self-care maintenance after three months, both when it was performed only for patients (arm 1) and for the patient–caregiver dyad (arm 2) (Cohen’s d = 0.92, p < 0.001, and Cohen’s d = 0.68, p < 0.001, respectively). These effects were stable over the one-year follow-up. No effects were observed concerning self-care management, while MI moderately influenced self-care confidence. Conclusions: This study supports the adoption of nurse-led MI in the clinical management of adults with HF. ## 1. Introduction Heart failure (HF) is a major public health concern worldwide, affecting approximately 1–2% of the global adult population [1]. HF is a clinical syndrome caused by several potential underlying etiologies and characterized by key symptoms such as dyspnea, ankle swelling, and exhaustion, and by clinical signs (e.g., peripheral edema) [2]. HF is associated with poorer quality of life, increased hospitalization rates, higher health-related costs, and decreased overall survival [1,3,4].
HF also places health-related challenges on the well-being of informal caregivers, as it is associated with a reduced quality of life and health-related issues for them [5]. Patients with HF need to adhere to the recommended medication regimen and pay special attention to dietary sodium and fluid restrictions, exercise regimens, body condition monitoring, behavior and mood control, accurate symptom detection, therapy impact evaluation, and other self-care behaviors [2,6]. Actual practice often falls short of these demands, as the self-care behaviors of adults with HF have been extensively described as mainly inadequate [7,8,9,10,11,12]. Self-care is the decision-making process that includes behaviors that help maintain heart failure stability (self-care maintenance), allow patients to perceive symptoms (self-care monitoring), and manage signs and symptoms (self-care management) [13,14]. Self-care maintenance includes exercising (e.g., brisk walking), avoiding getting sick, medication adherence, and dietary and fluid adherence. Self-care monitoring is based on promptly recognizing the cardinal HF symptoms and signs (e.g., weight gain, dyspnea, peripheral edema). Self-care management reflects patients’ knowledge and health literacy in decision-making when symptoms and/or signs occur. Overall, self-care behaviors are positively influenced by the patient’s perception of adequately performing demanding self-care behaviors (self-care confidence) [13,14]. Among the strategies to sustain adequate self-care in patients with HF, motivational interviewing (MI) has shown promising results [15,16,17,18,19]. By exploring and resolving ambivalence, MI, a goal-directed and patient-centered counseling technique, assists individuals in improving their health-related behaviors [16,20,21,22].
The essential components of MI include showing empathy, developing discrepancy between expected behaviors and currently performed unhealthy ones, refraining from argumentation, promoting self-efficacy, and sustaining a shared strategy [23]. Individual psychosocial behavioral interventions utilizing MI have shown improved medication adherence and high levels of participant satisfaction in several chronic conditions [16,24]. Recent studies show that nurse-led MI is safe and effective in pursuing behavioral changes in patients with chronic conditions, because nurses are healthcare professionals who work closely with patients’ needs, beliefs, and behaviors and are able to detect misconceptions regarding clinical aspects [24,25]. A recent meta-analysis of nine experimental studies shows that MI has moderate effects on enhancing self-care confidence and self-care management and large effects on improving self-care maintenance [26]. Despite this important synthesis of evidence, the authors stated that more empirical and experimental research is still required to corroborate the efficacy of MI on self-care in patients with HF, because of the heterogeneity of the included populations and the scarcity of clinical trials measuring self-care with theory-grounded self-report scales [26]. In other words, more randomized controlled trials are required to close the gap in evidence that currently undermines the generalizability and transferability of the efficacy of MI in managing HF [27]. For this reason, this randomized clinical trial (RCT) aimed (a) to test the efficacy of nurse-led MI in enhancing self-care maintenance (primary endpoint), self-care management, and self-care confidence three months after enrollment in adults with HF compared to usual care, and (b) to assess changes in self-care over the follow-up times (3, 6, 9, and 12 months). ## 2.1.
Design This was a single-center, randomized, controlled, parallel-group, superiority study with two experimental arms and a control group. Allocation was based on a 1:1:1 ratio between the intervention groups and the control group. The ClinicalTrials.gov identifier is NCT05595655. This study was approved by the Ethical Committee of San Raffaele Hospital (approval #74/INT). ## 2.2. Study Setting This study enrolled ambulatory patients in the Heart Failure Clinic of the IRCCS Policlinico San Donato in northern Italy. The focus of care ranges from prenatal diagnosis to rehabilitation and from newborns to the very elderly; the medical–nursing staff is specialized in several areas of cardiology, heart surgery, vascular surgery, and anesthesia, with a strong focus on clinical research [28]. IRCCS Policlinico San Donato is a reference center for cardiovascular diseases [29]. ## 2.3. Participants Participants were patients with HF who did not practice adequate self-care, and their caregivers. Patients met the requirements for participation if they fulfilled the following criteria: (a) had a diagnosis of HF classified as New York Heart Association (NYHA) class II–IV; (b) had evidence of inadequate self-care, determined by a score of 0, 1, or 2 on at least two items of the self-care maintenance or self-care management scales of the Self-Care of HF Index v.6.2 (SCHFI v.6.2) [30]; (c) were willing to sign the informed consent to be enrolled; and (d) were aged ≥ 18 years. Patients were excluded if they had had a myocardial infarction during the previous three months, had severe cognitive impairment with a Six-Item Screener score between 0 and 4 [31], resided in a nursing home where self-care was not required, or had an informal caregiver who did not wish to be involved in the study. Informal caregivers were eligible to be enrolled if the patients confirmed them as the principal caregivers.
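The patient inclusion criteria above amount to a simple conjunctive rule, which can be sketched in a few lines. The function and parameter names are my own, the item scores are hypothetical, and the exclusion criteria (recent myocardial infarction, cognitive impairment, nursing-home residence) are omitted for brevity.

```python
def has_inadequate_self_care(item_scores) -> bool:
    """Criterion (b): a score of 0, 1, or 2 on at least two items of the
    SCHFI v.6.2 self-care maintenance or self-care management scales."""
    return sum(1 for s in item_scores if s in (0, 1, 2)) >= 2

def patient_is_eligible(nyha_class: int, item_scores, consented: bool, age: int) -> bool:
    """Inclusion criteria (a)-(d) for patient enrollment (exclusions not modeled)."""
    return (nyha_class in (2, 3, 4)          # (a) NYHA class II-IV
            and has_inadequate_self_care(item_scores)  # (b)
            and consented                     # (c) signed informed consent
            and age >= 18)                    # (d) adult

print(patient_is_eligible(2, [0, 1, 4, 4], True, 70))  # True: two low-scoring items
print(patient_is_eligible(2, [3, 4, 4, 2], True, 70))  # False: only one low item
```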
A dyad was not eligible if either the patient or the caregiver refused to participate in the trial during the baseline period; however, if one participant left the study after enrollment, the other was allowed to continue. Eligible dyads were enrolled after receiving a clinician invitation letter stating the aim of the study and the procedure. ## 2.4. Experimental Arms A trained nurse with experience in educating patients with HF delivered the MI. Four registered nurses participated in a 32 h training course on MI and an 8 h refresher course on evidence-based care for HF. The registered nurses were all female; two of them had a Master of Science in nursing, one was a doctoral (PhD) student in nursing science, and one had a bachelor’s degree. The nurses’ average age was 28.75 years (standard deviation, SD = 5.12; range: 24–36), and they had an average of 5.75 years of work experience in cardiology (SD = 4.35; range: 2–12). The intervention consisted of face-to-face nurse-led MI sessions lasting around 30 min. The first MI session had to be performed within 2 months of enrolment, followed by four further MI sessions at 3, 6, 9, and 12 months performed by the same interventionist. To strengthen the intervention and to sustain adherence to the protocol, the nurse who performed the MI contacted the patients via telephone three times during the first two months after the MI. This scheduled approach of delivering MI five times during the study has never been tested in previous studies [16]. In arm 1, the MI was delivered only to patients; in arm 2, MI was delivered simultaneously to the patient–caregiver dyad. Participants enrolled in the experimental arms (arms 1 and 2) received MI interventions as an add-on to the standard of care. ## 2.5.
Standard of Care and Control Group The standard of care included clinical visits in the outpatient setting every 6 to 12 months, depending on the severity of the patients’ HF conditions and their specific clinical pathways. Education in the standard of care was based on discussions with patients about relevant materials geared toward HF self-care. Patients in the control group received the standard of care only. ## 2.6. Procedures A research assistant (outcome assessor) screened the patients using the SCHFI v.6.2 [30] and the Six-Item Screener [31] following the study protocol, after patients and caregivers gave their consent. When a patient passed the eligibility screening, the protocol-required questionnaires were administered to both patients and caregivers. They received the questionnaires individually at baseline and at each follow-up, and they were not permitted to work together to complete them. At 3, 6, 9, and 12 months after enrollment, follow-up data were collected via telephone. The outcome assessor was kept blind to the research arms at both baseline and all follow-up points. Interventionists and participants were not blind to the study arm. ## 2.7. Randomization A web-based system generated the randomization sequence, assigning participants in a 1:1:1 ratio to the intervention or control groups using simple randomization. Allocation sequences were produced by computer-generated algorithms and made available only after the trial. The interventionists were not informed of the allocation sequence. The randomization process started after the site employees (study nurses) entered the patients’ information into the database (REDCap). Each randomization number was generated and sent by a study nurse to the interventionist (a trained nurse who performed the MI), who was not the professional who assessed the outcomes.
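A simple 1:1:1 randomization like the one described can be sketched in a few lines of stdlib Python; this is illustrative only (the trial used a web-based system and REDCap, not this code), and the arm labels are my own shorthand.

```python
import random

def simple_randomization(n_participants: int, seed: int = 2022):
    """Assign each participant independently to one of three arms with
    equal probability (simple randomization in a 1:1:1 ratio)."""
    rng = random.Random(seed)  # fixed seed gives a reproducible sequence
    arms = ("arm 1: MI for patients", "arm 2: MI for dyads", "arm 3: control")
    return [rng.choice(arms) for _ in range(n_participants)]

sequence = simple_randomization(180)
print(len(sequence))  # 180
```

Note that simple randomization guarantees equal allocation probability but not exactly equal group sizes; block randomization would be needed to enforce balance, and the paper does not report using it.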
Each participant’s enrollment and follow-ups were always communicated to the trial coordinator. ## 2.8. Measurements The measurements for patients were socio-demographic and clinical characteristics. Socio-demographics were sex (male, female); age (years); marital status (single, married, divorced, widowed); education (high school or higher, lower than high school); employment (active worker, retired); and income (more than necessary to live, necessary to live, and less than necessary to live). Clinical characteristics were NYHA class (II, III, or IV functional class), Charlson comorbidity index (CCI, score) [32], ejection fraction (HFpEF = preserved ejection fraction; HFmrEF = mid-range ejection fraction; HFrEF = reduced ejection fraction), time with HF (months), BMI (kg/m2), and the Montreal Cognitive Assessment (MoCA) (score) [33]. The outcomes of this study were the self-care scores measured using the SCHFI v.6.2 [30]. ## Outcomes The SCHFI v.6.2 was used to assess the self-care maintenance score at baseline, after 3 months (primary endpoint), and over the follow-up times. The SCHFI v.6.2 also allowed researchers to measure the secondary outcomes: self-care management and self-care confidence at baseline and over the follow-up times. Each score ranges from 0 to 100, with higher scores indicating better self-care. Patients had to fill out the self-care management scale only if they had previously reported experiencing HF symptoms, such as dyspnea. A score of 70 or higher on each domain denoted adequate self-care. ## 2.9. Sample Size The pooled mean of self-care maintenance described using the SCHFI v.6.2 in two previous descriptive studies performed in northern Italy was 53.55, with a pooled standard deviation of 18.98 [34]. Previous studies showed that MI could improve mean self-care scores in patients with HF by a delta (Δ) ranging from 6 to 15 (pooled mean Δ = 10.95) [26].
Therefore, 49 patients per arm were required to reject the two-tailed null hypothesis of equal mean scores between the study arms with a power of 80%. A sensitivity analysis considering slight variations in the Δ and accounting for 20–25% attrition, as in similar research [16], showed that a total of 180 ± 6 participants was necessary to preserve enough power (80%) to detect significant mean differences between the experimental arms and the control group (60 ± 2 participants per arm). ## 2.10. Treatment Fidelity The trial coordinator evaluated treatment fidelity by randomly evaluating the performed MI sessions in arms 1 and 2 using the Motivational Interviewing Treatment Integrity (MITI) scale [35]. The MI interventions were all audio-recorded, and the MITI was used to randomly evaluate four MI interventions per arm at each time point. The scores ranged between 2 and 5, and the median of the assessments in both arms was 3, indicating an ideal technical quality score. ## 2.11. Timeline Enrollment required approximately 36 months (from May 2017 to May 2020; the study ended with the last follow-up in May 2021) to avoid overwhelming the activities of the staff involved in the study (i.e., four interventionists, a trial coordinator, two outcome assessors, a study nurse, a data manager, the principal investigator, and the co-investigators). The study was conducted at a cardiovascular hub center that remained operational during the COVID-19 pandemic waves. As a result, the researchers were able to conclude the study during the pandemic by leveraging the center’s ongoing interactions with heart failure patients. Figure 1 shows the patient flow. ## 2.12. Statistical Analysis All data were analyzed using an intention-to-treat approach. Categorical variables were described in terms of absolute and relative frequencies.
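The sample-size calculation from Section 2.9 can be reproduced with the standard normal-approximation formula for a two-sample comparison of means. A sketch with the paper's inputs (SD = 18.98, Δ = 10.95, α = 0.05, power = 80%) follows; note that the normal approximation yields 48 per arm, so the reported 49 presumably includes a small-sample (t-distribution) correction.

```python
from math import ceil

def n_per_arm(sd: float, delta: float,
              z_alpha: float = 1.96, z_beta: float = 0.8416) -> int:
    """Patients per arm for a two-sided, two-sample comparison of means:
    n = 2 * ((z_alpha + z_beta) * sd / delta)^2, rounded up."""
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

def inflate_for_attrition(n: int, attrition: float) -> int:
    """Enlarge the per-arm sample to compensate for expected dropout."""
    return ceil(n / (1 - attrition))

n = n_per_arm(18.98, 10.95)
print(n)                               # 48 (the paper reports 49)
print(inflate_for_attrition(n, 0.20))  # 60 per arm, i.e., 180 participants in total
```

With 20% attrition, inflating 48 per arm gives 60 per arm, consistent with the paper's 60 ± 2 participants per arm.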
Interval and continuous variables were evaluated for normality using the Shapiro–Wilk test; data with a normal distribution are presented using the mean and standard deviation (SD), while the median and interquartile range (IQR) were used to summarize non-normally distributed data. Baseline characteristics were compared between arms to determine whether they were balanced. Missing outcome scores amounted to 12%, 10%, and 11% in arms 1, 2, and 3, respectively; they were imputed using multiple imputation based on random-effects models, after having assessed that the missingness mechanisms (missingness in relation to time and study arm) and patterns (monotone missingness based on sensitivity analysis) supported the missing at random (MAR) assumption. The delta (Δ) of the self-care scores was calculated at each follow-up period by subtracting the baseline self-care score (T0) from the follow-up score, to determine the changes in self-care scores across follow-up times (T1, T2, T3, and T4). As the primary endpoint was a significant improvement in self-care maintenance scores in arms 1 and 2 over the control group, a two-sample t-test was employed to compare the delta of the self-care score in arms 1 and 2 versus the control arm 3, under the assumptions of the central limit theorem [36]. A similar approach was used for each follow-up time and for the secondary outcomes (self-care management and self-care confidence). Specifically, the t-test effect size estimates were computed using the d statistic for independent t-tests (Cohen’s d), where d values lower than 0.5 indicated small effects, values between 0.5 and 0.8 moderate effects, and values greater than 0.8 large effects [37]. In addition to this approach, data on the primary and secondary outcomes at each follow-up time were dichotomized into adequate (scores equal to or greater than 70) or inadequate (scores lower than 70) and compared (arm 1 vs. arm 3; arm 2 vs.
arm 3) using the chi-square test or Fisher’s exact test, as appropriate. Mixed models for repeated measures were used to analyze changes across time (from baseline to T4) in the primary and secondary outcomes, following the strategy of a previous study [16]. These models included, as the dependent variable, the outcome scores available from T0 to T4 for each patient in each study arm. By including a random intercept in the models, the inter-dependence of self-care maintenance, management, and confidence measurements on the same subject was addressed. The randomization arm (nominal variable) was included in the models as an independent variable, along with the baseline characteristics (i.e., age, sex, income, NYHA class, CCI score, MoCA, time since diagnosis, ejection fraction, and self-care confidence). Furthermore, the slopes derived from the models were compared between arms 1 and 2 and the slopes of arm 3 for each outcome. The significance level was set at 0.05 in all tests, and analyses were performed using Stata Statistical Software: Release 17 (StataCorp LLC, College Station, TX, USA, 2021). ## 3.1. Participants’ Characteristics Patients’ baseline characteristics, stratified and compared by arm, are shown in Table 1. No differences are found in the baseline characteristics. The majority of patients were female (51.1%, 55.0%, and 52.5% in arms 1, 2, and 3, respectively), as were the majority of caregivers (73.4%, 69.5%, and 74.0% in arms 1, 2, and 3, respectively). In arms 1, 2, and 3, patients had mean ages of 68.39 (SD = 12.14), 69.44 (SD = 6.71), and 71.08 (SD = 12.95) years, respectively, and caregivers had mean ages of 56.28 (SD = 9.12), 59.44 (SD = 11.10), and 58.17 (SD = 9.08) years, respectively. In arms 1, 2, and 3, most caregivers were married (57.1%, 63.2%, and 61.5%, respectively), as were 54.1%, 45.0%, and 37.7% of patients, respectively.
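The Cohen's d computation described in the statistical analysis can be reproduced from the reported T1 deltas for arm 1 (mean 12.84, SD 11.50) and arm 3 (mean 2.78, SD 10.33). A minimal sketch, assuming approximately equal group sizes so that the pooled SD reduces to the root mean square of the two SDs:

```python
from math import sqrt

def cohens_d(mean1: float, sd1: float, mean2: float, sd2: float) -> float:
    """Cohen's d for two independent groups of (approximately) equal size."""
    pooled_sd = sqrt((sd1 ** 2 + sd2 ** 2) / 2)  # RMS of the two SDs
    return (mean1 - mean2) / pooled_sd

# Reported T1 deltas: arm 1 (12.84, SD 11.50) vs. arm 3 (2.78, SD 10.33)
d = cohens_d(12.84, 11.50, 2.78, 10.33)
print(round(d, 2))  # 0.92, matching the reported large effect size
```

The same function applied to the arm 2 vs. arm 3 deltas recovers the reported moderate effect.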
Most patients and caregivers reported an educational status lower than high school (patients in arms 1, 2, and 3: 72.1%, 75.0%, and 73.8%, respectively; caregivers: 59.1%, 63.7%, and 62.9%, respectively). Regarding income, most patients answered that they had the necessary income to live. The median (IQR) time with HF was approximately 4 years in the three arms. The median (IQR) BMI indicated values within the normal range. Overall, most patients were in NYHA class II, with two comorbidities, HF with preserved ejection fraction (HFpEF), and inadequate self-care maintenance and management scores. ## 3.2. Self-Care Maintenance (Primary Endpoint), Management, and Confidence at the First Follow-Up (T1, 3 Months) The increase in the self-care maintenance scores (primary endpoint) from baseline to T1 (3 months after enrolment) is higher in arms 1 and 2 compared to arm 3 (Figure 2). In arms 1, 2, and 3, the mean Δ indicating an increase in the self-care maintenance score is 12.84 (SD = 11.50), 10.81 (SD = 13.05), and 2.78 (SD = 10.33), respectively, indicating a large effect size in the Δ between arm 1 and arm 3 (Cohen’s d = 0.92, p < 0.001) and a moderate effect size in the Δ between arm 2 and arm 3 (Cohen’s d = 0.68, p < 0.001). Regarding self-care management scores, no differences are found between arms 1 and 2 versus arm 3 (see Table 2). Conversely, regarding self-care confidence scores, only the increased scores observed in arm 2 are significantly higher than those in arm 3, with a moderate effect size (Cohen’s d = 0.58, p = 0.002). The comparisons of the scores dichotomized into adequate (scores ≥ 70) and inadequate (scores < 70) do not show significant differences for any outcome (see Table 2). ## 3.3.
Changes in Self-Care Maintenance, Management, and Confidence over Follow-Up Times The self-care maintenance, self-care management, and self-care confidence scores over time are reported in Figure 2 and Table 2. In relation to self-care maintenance, the effects detected at T1 (after 3 months) generally remain stable through one year (T4). No differences are found in relation to self-care management scores over time. Conversely, regarding self-care confidence, at T1 and T2, moderate-to-small improvements are observed in arm 2 compared to arm 3, while in arm 1, self-care confidence shows small-to-moderate improvements at T3 and T4 (see Table 2). The trends over time (from baseline to T4) derived from the mixed models for the self-care maintenance, self-care management, and self-care confidence scale scores are shown in Figure 3. Regarding the self-care maintenance slopes, arm 1 and arm 2 versus arm 3 show significant differences (p = 0.038 and p = 0.047, respectively). No differences are found concerning self-care management scores (p = 0.398 and p = 0.447, respectively). Regarding self-care confidence, only the comparison between the trends of arms 2 and 3 shows a significant difference (p = 0.031). These trends are confirmed when the mixed models are adjusted for age, sex, income, NYHA class, CCI score, MoCA, time since diagnosis, ejection fraction, and baseline self-care confidence. ## 4. Discussion This study demonstrated that nurse-led MI performed using a scheduled approach (every three months over one year) was effective in improving self-care maintenance, with stable effects over the follow-up times. The scheduled approach used to deliver MI in this study is a significant innovation, since no previous randomized controlled trials have utilized a similar approach [6,16,26,38,39,40,41]. This approach allows for a more structured and consistent delivery of motivational interviewing to participants, which may enhance its effectiveness.
In this study, nurse-led MI also improved self-care confidence, with some differences depending on whether the intervention was performed only for patients (arm 1) or for the patient–caregiver dyads (arm 2). Overall, the results derived from this RCT corroborate previous evidence [6,16,26,38,39,40,41], adding insights regarding five main aspects: (a) nurse-led MI performed with scheduled recurrences over time likely produces stable improvements in self-care maintenance; (b) the characteristics of HF (e.g., NYHA class or ejection fraction) seem to play a non-significant role in influencing the efficacy of MI in improving self-care maintenance over time; (c) the effects of MI performed only for patients seemed to be more stable than the effects of MI performed for the dyads, in contrast to the effects shown in a previous study [16]; (d) the role of MI in improving self-care management remains unclear; and (e) self-care confidence seems to be positively influenced by MI. The efficacy of nurse-led MI on self-care maintenance has important clinical implications, because it means that aspects such as treatment adherence, which are highly problematic among patients with HF, might be susceptible to significant improvement when trained nurses employ MI in clinical practice. It is not surprising to find that nurse-led MI effectively leads patients toward behavioral change [42,43,44]. In this regard, the key features of MI, such as adopting open-ended questions, affirming patients’ strengths, adopting reflective listening, and summarizing key points of the discussion, have the potential to be effective in patients with several clinical conditions, from individuals with HFpEF to patients with HFrEF.
The more stable effects on improving self-care maintenance shown in arm 1 over arm 2 may be explained by the nature of the training performed by the interventionists, which was mainly focused on the elements of MI per se and a brief refresher on evidence-based care for patients with HF, rather than on providing the skills to manage the complexity of the dyadic relationship during the MI. In other words, it is reasonable that interventionists found it easier to perform the MI only for patients rather than simultaneously managing the dyad as required in arm 2. In this regard, we have to acknowledge that a previous multicenter RCT found that the effects of MI performed for the patient–caregiver dyad were larger than those of MI performed only for patients [16]. From a theoretical perspective, if we consider the contribution of caregivers to the self-care practices of patients with HF [45,46,47], MI performed for the dyad should be the best option. However, the evidence from this study points out that delivering MI to the dyad requires different training from that designed only to provide skills for delivering MI-based interventions, because the complexity of the dyadic relationship should be included in educating the interventionists. Among self-care behaviors, self-care management practices seem to be less susceptible to change than self-care maintenance. This is theoretically explainable by the role of the several factors that determine self-care management, such as disease-specific knowledge and, broadly speaking, health literacy [48,49,50]. In fact, self-care management translates individual-level characteristics (values, beliefs, knowledge, and so on) into behaviors that reflect a decision-making process triggered by the detection of signs and/or symptoms [51,52,53].
Considering these aspects, it is reasonable to think that self-care management requires complex, multi-component interventions to be modified (e.g., psychosocial interventions combined with knowledge-based education and MI). Therefore, such interventions should aim to affect the main determinants of self-care management rather than self-care management per se. Self-care confidence is also susceptible to improvement after MI interventions. Considering that self-care confidence is one of the strongest predictors of self-care behaviors [51,54,55], this result might have interesting clinical implications because improving self-care confidence may trigger virtuous circles that improve several other health-related outcomes. The differences emerging between the two experimental arms of this study (i.e., arm 1 shows effects after six months, while arm 2 shows short-term effects) require further investigation in future studies and might reflect the complexity of managing MI in a dyadic setting. This study has several limitations. First, the single-center design limits the generalizability of the results. Second, the self-report scale used to assess primary and secondary outcomes (SCHFI v6.2) was the best option when the protocol of this RCT was written; however, it is currently outdated because the new SCHFI v7.2 is psychometrically more robust and allows researchers to assess self-care monitoring. Third, patient attrition over the trial was considerable ($19.3\%$ at T4); this aspect requires mitigation strategies in future studies and a more robust approach to ensure patient adherence to the protocol. Fourth, the limited focus of the interventionists' educational course on managing the complexity of the dyadic relationship between patients and caregivers might be considered a source of bias, especially in interpreting the effects of arm 2.
Finally, it is important to interpret the stability of the effects observed in relation to self-care maintenance with caution, given the repeated MI in the experimental procedure. While the results of this study suggest that the repeated MI approach may produce stable effects over a one-year follow-up period, it is important to consider that individual patients may respond differently to repeated interventions. Therefore, the generalizability of the findings to all patients with heart failure should be approached with caution. ## 5. Conclusions Nurse-led MI shows efficacy in improving self-care maintenance in patients with HF over a one-year follow-up. This RCT confirms previous evidence and supports the adoption of nurse-led MI in the clinical management of HF. Future research should corroborate this evidence in specific subgroups to enhance the external validity of this intervention and should explore the effects of nurse-led MI on clinical outcomes.
# Patterns and Trends in Pharmacological Treatment for Outpatients with Postherpetic Neuralgia in Six Major Areas of China, 2015–2019 ## Abstract The aim of this study was to assess the patterns and trends of pharmacological treatment for outpatients with postherpetic neuralgia (PHN) in China in the period 2015–2019. Prescription data for outpatients with PHN were extracted from the database of the Hospital Prescription Analysis Program of China according to the inclusion criteria. The trends in yearly prescriptions and corresponding costs were analyzed and stratified by drug class and specific drugs. A total of 19,196 prescriptions from 49 hospitals in 6 major regions of China were included for analysis. The yearly prescriptions increased from 2534 in 2015 to 5676 in 2019 ($p = 0.027$), and the corresponding expenditures increased from CNY 898,618 in 2015 to CNY 2,466,238 in 2019 ($p = 0.027$). Gabapentin and pregabalin were the most commonly used drugs for PHN, and more than $30\%$ of the prescriptions for these two drugs were combined with mecobalamin. Opioids were the second most frequently prescribed drug class, and oxycodone accounted for the largest share of the cost. Topical drugs and TCAs were rarely used. The frequent use of pregabalin and gabapentin was in accordance with current guidelines; however, the use of oxycodone raised concerns about rationality and economic burden. The results of this study may benefit the allocation of medical resources and management for PHN in China and other countries. ## 1. Introduction Postherpetic neuralgia (PHN) is a chronic complication of herpes zoster (HZ) caused by damage to peripheral nerve tissue during the onset of herpes zoster, and is defined as obvious pain persisting 3 months after the HZ rash [1]. PHN is often described as burning pain, tingling or itching, and its pain score is always ≥4 on a 10-point visual analog scale [2,3].
The burning and allodynic pain of PHN in the thoracolumbar region is more intense, while the tingling and numbness of PHN in the face are more intense [4]. Approximately $20\%$ of herpes zoster patients develop PHN [5]. In the United States, the total incidence rate of PHN is 57.5 cases per 100,000 person-years [6]. A study on herpes zoster and its complications in China reported that the incidence rate of PHN was 0.48 per 1000 person-years [7]. The pain caused by PHN is often unbearable and impairs the ability of many patients to work. This is equivalent to an annual indirect loss of CNY 28,025 (USD 4221), which not only leads to the loss of personal wages but also has a wider economic impact on society as a whole through productivity loss [8,9]. There are many treatment options for PHN, mainly including the most widely used pharmacological therapy and nonpharmacological methods such as nerve blocks, neuromodulation and nerve stimulation [10,11]. In the field of pharmacological treatment, some anticonvulsants, antidepressants, opioids and topical therapeutic drugs have been proven to relieve pain. Among the drug recommendations in many countries, gabapentin, pregabalin and TCAs are often used as first-line drugs, while topical drugs and opioids are also often suggested in treatment [1,10,12]. It is known that many drugs, such as gabapentin and pregabalin, are widely used in clinical treatment [13,14,15]. However, side effects of PHN drugs are common, and some drugs, especially opioids, have a potential risk of addiction [16]. Currently, little is known about the status of PHN drug use in China. Considering the increasing prevalence of herpes zoster in China [17], we designed this cross-sectional study to analyze the patterns and trends of PHN drug use, as well as its costs. ## 2.1. Study Design and Ethics This study was a retrospective prescription-based cross-sectional study, and informed consent was waived as part of the ethical approval.
Ethical approval was obtained from the Ethics Committee of Run Run Shaw Hospital, College of Medicine, Zhejiang University (Reference Number KEYAN20210924-33). ## 2.2. Data Source and Prescription Inclusion Prescription data were derived from the database of the Hospital Prescription Analysis Cooperative Project of China, which is widely used for pharmacoepidemiological studies [17,18,19,20,21]. The database was initiated in 2003, and the following prescription items are included in the database: prescription code, date of issue, sex and age of patient, department of physician, hospital code, drug generic name, strength, price and cost of drug, and diagnosis. Prescriptions for patients with a diagnosis of PHN were extracted, and those meeting the following criteria were included for analysis: [1] prescriptions written from 2015 to 2019; [2] prescriptions from hospitals situated in 6 major regions of China (Beijing, Shanghai, Hangzhou, Guangzhou, Chengdu and Tianjin) that participated in the program continuously; [3] prescriptions for adult outpatients (age > 18 years) diagnosed with PHN. Prescriptions with missing values were excluded from the analysis. ## 2.3. Analysis The prescriptions for patients with PHN were counted as the number of corresponding prescriptions per year. The annual cost was the sum of the total cost of PHN patients' prescriptions. It should be noted that neither an inflation factor nor a discount rate was applied. The trends in annual prescriptions and expenditures were analyzed, and further stratified and illustrated by drug class and specific drugs.
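The trend analyses here were run with the Mann–Kendall test in R's `trend` package. As an illustrative sketch only (not the authors' code), a minimal pure-Python implementation without tie correction is shown below; only the 2534 and 5676 endpoint counts come from the paper, and the intermediate yearly counts are invented.

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction): returns (S, z, p)."""
    n = len(x)
    # S = sum of signs of all pairwise forward differences
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n) for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    # Normal approximation with continuity correction
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return s, z, p

# Yearly prescription counts, 2015-2019: only the 2015 and 2019
# endpoints match the paper; the intermediate years are invented.
counts = [2534, 3100, 3900, 4800, 5676]
s, z, p = mann_kendall(counts)
print(s, round(z, 3), round(p, 3))
```

For five strictly increasing yearly values the statistic is S = 10 and the two-sided p ≈ 0.027 regardless of the magnitudes, which matches the significance reported for the yearly totals and expenditures.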
The drugs used to treat PHN were classified for analysis as follows: [1] anticonvulsants, including gabapentin, pregabalin, carbamazepine, oxcarbazepine, lamotrigine, valproic acid, topiramate and other analogs; [2] antidepressants, including tricyclic antidepressants (TCAs) and other serotonin (5-HT) and norepinephrine (NE) reuptake inhibitors; [3] opioids, with compound preparations containing opioids also classified as opioids; and [4] topical drugs, including capsaicin, lidocaine, flurbiprofen and diclofenac [11,13,14,22,23,24]. The trends in prescription numbers and costs for overall and individual drugs were assessed using the Mann–Kendall test. The trends in percentages were assessed using the log-linear test. The Wilcoxon signed rank test was used for the difference between the male and female prescription percentages. The average proportion and standard deviation of the combined use of gabapentin and pregabalin over the five years were calculated. The trend package in R (version 4.2.1) software was used for statistical analysis. Statistical significance was set at $p \leq 0.05$. ## 3.1. Demographic Characteristics of Patients and Overall Trends A total of 19,196 prescriptions were included in this study. Detailed demographic characteristics of patients with PHN prescriptions are shown in Table 1. The percentage of prescriptions for females was slightly higher ($p = 0.043$), and the proportion did not significantly change during the study period ($p = 0.198$). The yearly prescriptions and expenditures are shown in Figure 1A. The yearly prescriptions increased from 2534 in 2015 to 5676 in 2019 ($p = 0.027$), and the corresponding expenditures increased from CNY 898,618 in 2015 to CNY 2,466,238 in 2019 ($p = 0.027$). ## 3.2.
Trends in Prescriptions and Cost of Drug Class and Specific Drug The yearly total prescriptions for the four major classes of PHN drugs—anticonvulsants, antidepressants, opioids and topical drugs—increased during the study period (Figure 1B), and detailed prescription numbers are listed in Table 2. Anticonvulsants were the most frequently prescribed drug class, followed by opioids. Antidepressants and topical drugs were rarely used. Table 3 shows the costs and percentages of specific drugs. There was a certain difference between the trend in expenditure and the trend in prescriptions: the total costs of opioids were always higher than those of anticonvulsants, which had the most prescriptions (Figure 1C). Gabapentin and pregabalin were the most frequently used drugs. Prescriptions of pregabalin increased rapidly ($p = 0.002$), with the largest increase of $379\%$ in 2018. Regarding second-line opioid drugs, the proportion of oxycodone prescriptions was large and continuously increasing ($p = 0.031$). For antidepressants, the number of prescriptions of the traditional TCA amitriptyline was greater than that of the others. The topical drugs used were mainly lidocaine and capsaicin. Among anticonvulsants, the costs of pregabalin also increased. The average proportion of oxycodone costs per year was approximately $21.3\%$ of the total costs; this was the drug with the largest share of annual expenditure, and the proportion was stable ($p = 0.220$). Among antidepressants, the total costs of duloxetine and venlafaxine were higher, while the total costs of amitriptyline were lower. ## 3.3. Trends in Combination of Drugs Gabapentin and pregabalin, as the first-line drugs of choice, were combined with other drugs (Figure 1D). Mecobalamin was the drug most commonly used in combinations. On average, $36.7\%$ of gabapentin prescriptions were combined with mecobalamin, as were $30.0\%$ of pregabalin prescriptions. ## 4.
Discussion This is the first study to analyze the patterns and trends in pharmacological treatment for outpatients with PHN in China. The yearly prescriptions and costs of PHN drugs have been increasing. Two anticonvulsant drugs—gabapentin and pregabalin—were the most commonly used drugs, which is in line with current practice guidelines. At the same time, we also found that the opioid oxycodone was used in large quantities and accounted for a large proportion of costs, which might represent unreasonable use in the treatment of PHN. The percentages of antidepressants and topical drugs were relatively low, in both prescriptions and corresponding costs. Regarding drugs combined with gabapentin and pregabalin, we unexpectedly found that mecobalamin was used most frequently. The prescriptions for patients with PHN increased during the study period. In China, $7.26\%$ of herpes zoster patients have PHN, and the incidence rate in women is slightly higher than that in men ($7.45\%$ vs. $7.03\%$) [7], which is consistent with the results of our study. In addition, the number of people diagnosed with herpes zoster continued to increase from 2015 to 2019 [17]. Therefore, the progressive rise in prescriptions for PHN might be related to the increase in the number of people diagnosed with PHN. At present, the treatment of PHN is based on symptom control, and many studies have shown that antiviral drugs for herpes zoster have no significant effect on PHN or its prevention. Therefore, the treatment of PHN usually follows the principles of neuralgia treatment [25,26]. In the Chinese guidelines, the first-line drugs include pregabalin and gabapentin, TCAs (such as amitriptyline) and $5\%$ lidocaine patches, and the second-line drugs include opioids [27]. The first-line treatment of PHN in the United States includes TCAs, gabapentin and pregabalin, and a topical lidocaine $5\%$ patch. Opioids and capsaicin patches are recommended as second-line or third-line therapeutic drugs [14].
The French guidelines for neuralgia regard TCAs, other serotonin–norepinephrine reuptake inhibitors (duloxetine, venlafaxine, etc.) and gabapentin as first-line drugs for the treatment of neuralgia, with pregabalin, the weak opioid tramadol and capsaicin patches recommended as second-line drugs, and other strong opioids as third-line drugs [13]. According to the Canadian Pain Society consensus statement, gabapentin, TCAs and serotonin–norepinephrine reuptake inhibitors are first-line drugs for the treatment of neuropathic pain. Opioids are recommended as second-line drugs, while cannabinoids are newly recommended as second-line drugs [15]. In general, gabapentin, pregabalin and TCAs are often used as first-line drugs, while opioids are not the first choice for PHN. Thus, the predominant use of anticonvulsants, mainly gabapentin and pregabalin, was in accordance with current guidelines and evidence. Regional differences in drug use were not significant in this study. Gabapentin and pregabalin, the most frequently used drugs, are voltage-gated cation channel modulators [28,29]. Daily doses of 1800 mg to 3600 mg of gabapentin can provide patients with effective pain relief [30]. The general dosage of pregabalin for the treatment of PHN is between 75 mg and 600 mg per day, taken two to three times per day [31]. In our study, the duration of a single gabapentin prescription was generally about 16 days, and the single oral dose was between 100 mg and 1500 mg, two to four times a day. The duration of a single pregabalin prescription varied greatly, and the single oral dose was between 75 mg and 300 mg, one to three times a day. This is similar to the recommended dosages. The use of pregabalin increased significantly during the study period, and its prescriptions exceeded those of gabapentin in 2018. Gabapentin and pregabalin are recognized as drugs that provide good relief of PHN [32].
The increase in the use of pregabalin may be due to the following reasons. The first is the difference in the effects of the drugs. Omar et al.'s study on the difference between pregabalin and gabapentin initially showed that pregabalin was better at alleviating pain, while gabapentin had better effects on anxiety, insomnia and fatigue symptoms [33]. Previous studies have confirmed that pregabalin is highly effective and safe for patients with PHN in China [34]. Additionally, the widespread use of gabapentin and pregabalin calls for special attention to the effective management of their use, as these drugs also have side effects. An overdose of gabapentin or pregabalin can produce euphoric effects and lead to delirium [35]. Compared with pregabalin, abuse of gabapentin is a growing trend: a British survey found that the proportion of lifetime gabapentin abuse was $1.1\%$, compared with $0.5\%$ for pregabalin [36,37]. Another reason may be related to the expiration of the patent. According to the database of the China Pharmaceutical Industry Information Center, the patent protection of pregabalin expired in 2018. Although the brand of pregabalin used by patients did not change over the five years, the expiration of the patent increased the attention pregabalin received in wider society, especially among PHN patients, medical institutions and related pharmaceutical companies. More prescribers realize that pregabalin may work better than gabapentin for PHN, so they are more willing to prescribe pregabalin. Other anticonvulsant drugs, such as oxcarbazepine, have proven to have no better therapeutic effect on neuralgia than either gabapentin or pregabalin, and their use is rare [38]. Opioids are widely used in pain control, and oxycodone was the opioid with the largest number of prescriptions in the current study. Oxycodone is a semisynthetic μ- and κ-opioid receptor agonist with a wide range of applications [39].
Some studies have also shown that the use of oxycodone is not entirely beneficial in the treatment of PHN [29,40]. A review by Gaskell et al. of oxycodone in the treatment of neuralgia found no result within its scope demonstrating a substantial benefit of oxycodone, such as improvement in patients' global impression of clinical change in the treatment of neuralgia [41]. Thus, although opioids were recommended as second-line treatment for PHN, oxycodone was either not recommended or given only a very weak recommendation. However, compared with other opioids, oxycodone has the advantages of a long duration of action and no histamine release or ceiling effect, so it is still used frequently [39]. Regarding the situation in other countries, the study by Gudin et al. found that $21.6\%$ of PHN patients in the United States received opioids as initial treatment for PHN, while among the other first-line treatments for PHN, gabapentin accounted for $15.1\%$, pregabalin for $3.3\%$ and TCAs for $2.5\%$, showing that excessive use of opioids was common [42]. Opioids are prone to cause peripheral nerve injury, which leads to increased nociceptive hypersensitivity, various adverse reactions and drug interactions [43,44,45]. Our results also show that the cost of oxycodone accounted for a large proportion of overall expenditure and that its spending was sustained at a high level over the five-year study period. Therefore, the widespread use of oxycodone has raised concerns about rationality and the economic burden on patients. For this phenomenon, the relevant departments should maintain a high degree of vigilance and remind prescribers to reduce or limit the use of related addictive drugs if necessary.
Prescribers should evaluate the degree of pain of patients before using drugs, and relevant departments could set different indicators of analgesic use for different pain levels, so as to re-evaluate whether opioid analgesics should be used to manage PHN. The use of antidepressants was far lower than that of anticonvulsants and opioids, which reflects physician behavior and patient preference. Antidepressants have a certain relieving effect on PHN [46]. However, TCAs such as amitriptyline cannot achieve satisfactory effects for all patients in the treatment of PHN pain [47,48]. Another reason for the infrequent use of TCAs is their adverse effects: they may cause nausea, headache, constipation and other negative effects that patients are unwilling to bear [49,50]. There are also trials showing that the combination of amitriptyline with other analgesic drugs, such as pregabalin, may have a better effect [47,51]. Topical drug use did not change much over our study period, although many clinical trials have confirmed that topical treatment has a certain therapeutic effect on PHN with fewer adverse effects [52,53,54]. Regarding drug combinations, we found that mecobalamin, which does not belong to the main treatment drugs, was used with high frequency. Mecobalamin is a vitamin preparation, the activated form of vitamin B12. A few studies show that it not only has a good therapeutic effect on PHN, but can also relieve peripheral polyneuropathy, entrapment neuropathy and glossopharyngeal neuropathy [55,56]. A study showed that in four trials including 383 participants, the pain numerical rating scale scores in the vitamin B12 group decreased faster than in the placebo group. Vitamin B12 can improve the quality of life of patients with PHN and significantly reduce the number of patients using analgesics [57].
The combined use of mecobalamin seems to be justified and reasonable, but more studies are needed in the future to confirm the safety of its use and its impact on patients. There are also several limitations to our study. First, the severity of PHN and clinical outcomes were not measured and matched with the prescriptions; if patients' degree of pain is included in future studies, stratified statistics on the drugs used could be gathered. Second, the rationality of drug use was not assessed, due to the large number of prescriptions. Other comorbidities may introduce some bias into the statistical results: although all patients with prescriptions were diagnosed with PHN, the prescriptions also contained many drugs that might not have been used to treat PHN. Finally, sampling bias may exist: although prescriptions came from many hospitals located in representative areas of China, primary care and non-hospital-based outpatient prescriptions were not included in our study. Therefore, future research should study the correlation between patients' degree of pain and the corresponding drug selection, and include more information on patients' diseases, making it possible to better analyze the rationality of drug use. ## 5. Conclusions The status and trends of pharmacological treatment for outpatients with PHN in China during a five-year period were analyzed in this study, and the yearly prescriptions and corresponding costs were both found to have increased. Gabapentin and pregabalin were the most frequently used drugs for PHN, which is in accordance with current practice guidelines. Among them, the use and cost of pregabalin showed a significant increasing trend. Oxycodone, an opioid drug with a strong analgesic effect, had the third most yearly prescriptions but took the largest share of cost, which raised concerns about the rationality of its use and the economic burden on PHN patients.
The finding that mecobalamin was the drug most commonly used in combination may be due to its beneficial effect on peripheral nerves, but more research is still needed to study its mechanism of action in PHN in the future. The percentages of antidepressants and topical drugs were relatively low, which reflects physicians' behavior and patients' preferences in China. The results of this study indicate that the relevant departments and prescribers should attach great importance to the use of drugs in the treatment of PHN, especially with regard to addictive drugs. Our study may benefit the allocation of medical resources and management for PHN in China, as well as in other countries.
# Caloric and Lipid Profiles during Pregnancy in a Socio-Culturally Diverse Society ## Abstract This research analyzes the determining factors in diet quality among the Spanish pregnant population with the aim of promoting healthier eating habits and preventing the development of non-communicable diseases. It is a diagnostic, non-experimental, cross-sectional, and observational study, with correlational descriptive methodology, and 306 participants. The information was collected using the 24 h dietary recall. Various sociodemographic factors that influence diet quality were analyzed. It was found that pregnant women consume too much protein and fat, score high in SFA consumption, and do not meet the carbohydrate (CH) recommendations, while consuming twice the recommended amount of sugar. Carbohydrate intake is inversely related to income (β = −0.144, $p \leq 0.005$). Likewise, protein intake is linked to marital status (β = −0.114, $p \leq 0.005$) and religion (β = 0.110, $p \leq 0.005$). Finally, lipid intake appears to be conditioned by age (β = 0.109, $p \leq 0.005$). As regards the lipid profile, a positive association is observed only between age and MFA consumption (β = 0.161, $p \leq 0.01$). On the other hand, simple sugars are positively related to education (β = 0.106, $p \leq 0.005$). The results of this research show that the diet quality of pregnant women does not meet the nutritional recommendations established for the Spanish population. ## 1. Introduction At present, diet is considered one of the factors with the greatest influence on well-being, health, and quality of life, having a direct effect on the morbidity and mortality of a given population. Hence, a healthy diet is one of the most important aspects for improving health [1]. Noncommunicable diseases (NCD) are the main cause of death and disability in women worldwide, including women of reproductive age [2].
The Sustainable Development Agenda includes specific targets on maternal health and NCD, such as a reduction in the global maternal mortality rate to 70 deaths for every 100,000 live births, and a reduction by one-third in premature mortality due to NCD [3]. NCD, including cardiovascular diseases (CVD), cancer, chronic respiratory diseases, and neurodegenerative diseases, are the main cause of morbidity and mortality in the world. Among the main risk factors, we can highlight those aspects relating to lifestyle, such as an unbalanced diet, obesity, physical inactivity, emotional state, and quality of life, as well as smoking and alcohol consumption [4,5,6]. Nevertheless, excess weight and obesity continue to rise each year due to multiple factors, including poor eating habits [7,8,9], characterized by high levels of processed meats and fats, saturated fats, refined grains, salt and sugars, and a lack of fresh foods, fruit, and vegetables [10]. At the end of the 19th century, the Spanish diet gradually came to cover nutrient and energy requirements, this being achieved with more difficulty for minors, adult women, and pregnant women. At the end of the 20th century and the beginning of the 21st, as has occurred in other countries, energy intake has increased in an excessive and unbalanced manner, causing deficiencies in the main micronutrients [11]. The gestation period is a critical time for establishing the risks of chronic diseases in offspring [12]. Nutrition plays a key role during this period of development and, since it is a determining factor of risk throughout life, it is a modifiable risk factor. Although the World Health Organization (WHO) provides guidelines for prenatal care [13], there is a lack of comprehensive guidelines detailing the nutritional requirements of women during reproduction, from preconception through pregnancy and breastfeeding [14].
Pregnancy is a vulnerable stage, and the evidence on maternal nutrition shows the influence it exerts on the mother's health and on healthy fetal growth and development [15]. An inadequate nutritional intake during this stage of life may have negative consequences for health in the short and long term, both for the mother [16] and for the child [17], such as premature births or miscarriages [18], hypertensive disorders [19], obesity or diabetes in childhood [20,21,22], alterations in fetal growth [23], and susceptibility to allergies and bacterial infections [24], among others. Nevertheless, a healthy diet before and during pregnancy is associated with a lower risk of all these diseases [25,26,27,28]. Some studies indicate that Spanish women do not meet the dietary recommendations of scientific societies [29,30]. This failure is related to pregnant women's socioeconomic level, culture, age, and tobacco and alcohol consumption [31,32,33], among other factors. Culture is therefore a factor that influences dietary habits: the frequency of daily food intake, and the quantity and type of food consumed, will often depend more on culture than on the availability of the food itself [34,35]. The city of Melilla, where this study was performed, is a border city with Morocco, which contributes to the city's multiculturality. This proximity means that many Moroccans cross the border to seek healthcare [36]. Melilla is, therefore, an optimum setting for studying social and cultural differences in relation to eating habits during pregnancy. Hence, the general objective of this study is to analyze dietary quality in pregnant women in the multicultural city of Melilla, as well as the factors that may influence it, with the aim of promoting healthy eating habits at this stage. ## 2.1. Study Design and Participants It is a diagnostic, non-experimental, cross-sectional, and observational study, with correlational descriptive research methodology.
The sample was selected by non-probability convenience sampling from the population data on pregnant women collected in the public health system of the city of Melilla over the last 18 years. The sample comprises 306 pregnant women, with an average age of 29.92 (5.51) years, a minimum age of 18 and a maximum of 43; specifically, 196 ($64.1\%$) were born in the city of Melilla. The characteristics of the sample, such as residence, place of birth, number of children, marital status, education, and income, are shown in Table 1. ## 2.2. Instruments and Procedure The 24 h dietary recall by Rodríguez et al. [37] was used to determine diet quality. It is a questionnaire in which participants record the number of grams of food ingested during the previous day (breakfast, lunch, afternoon snack, and food intake between meals) after being given an appropriate explanation of how to estimate said amounts, in order to increase result reliability [38]. Tables with representative images of foods and drinks in various sizes and gram amounts were used to facilitate data collection. This questionnaire not only captures in detail the quality and quantity of food and drink (in grams), but also details the culinary process and places emphasis on the quantity and quality of bread, oil, and sugar. The participants completed the questionnaires in person after written informed consent was provided. These data were gathered between March and December 2021. ## 2.3. Statistical Analysis The data obtained were analyzed with the statistical program SPSS, version 26.0 (International Business Machines Corporation (IBM), Armonk, NY, USA). Basic statistics were used, according to the nature of the variables, for the descriptive analysis.
Thus, for the quantitative variables, measures of central tendency (mean, median, mode), dispersion (standard deviation), and position (distribution limits) were used, whilst absolute and relative frequencies (percentages) were used for the qualitative variables. Non-parametric tests were used for the inferential analysis, according to the results of the Kolmogorov–Smirnov test. The chi-squared test was used for the comparison of proportions, and p ≤ 0.05 was considered statistically significant. Likewise, the Mann–Whitney U test and the Kruskal–Wallis test were used to relate diet quality with sociocultural factors, depending on the number of categories of the independent variables. Three multiple regression models were performed with the independent variables dichotomized, with the aim of verifying the degree to which the sociodemographic variables may determine the caloric, lipid, and simple sugar consumption profiles. An Ordinary Least Squares (OLS) analysis was performed to compare the dependent variables (caloric profile, lipid profile, and simple sugar consumption) with the rest of the study variables; standardized and non-standardized regression coefficients (β) were also obtained. The food data provided by the 24 h dietary recall were transformed into energy intake, consumption of macronutrients (carbohydrates, lipids, and proteins), micronutrients (vitamins and minerals), and plant fiber using the IENVA dietary calculator (https://calcdieta.ienva.org/tu_menu.php, accessed on 3 January 2023) with the advice of a nutritionist. The grams of daily consumption of each macronutrient were compiled, as well as the sugars and types of fatty acids; each was multiplied by the kcal it provides, and the percentage of the total caloric value (TCV) was obtained by multiplying by 100 and dividing by the total kcal ingested. ## 2.4.
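The caloric-profile arithmetic described above can be sketched in a few lines. This is an illustrative example only, assuming standard Atwater energy factors (4 kcal/g for protein and carbohydrate, 9 kcal/g for fat), which the text implies but does not state; the intake values are hypothetical.

```python
# Sketch of the caloric-profile calculation: grams of each macronutrient are
# converted to kcal (Atwater factors, assumed here: 4 kcal/g for protein and
# carbohydrate, 9 kcal/g for fat) and expressed as a percentage of the total
# caloric value (TCV).
ATWATER = {"protein": 4.0, "carbohydrate": 4.0, "fat": 9.0}

def caloric_profile(grams: dict) -> dict:
    """Return each macronutrient's share of total energy, in % of TCV."""
    kcal = {n: g * ATWATER[n] for n, g in grams.items()}
    total = sum(kcal.values())
    return {n: 100.0 * k / total for n, k in kcal.items()}

# Hypothetical daily intake for one participant (illustrative values only).
profile = caloric_profile({"protein": 77, "carbohydrate": 215, "fat": 81})
print({n: round(p, 1) for n, p in profile.items()})
```

Because the shares always sum to 100%, the profile is comparable across participants regardless of their total energy intake.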
Ethical Considerations This research was governed by the ethical principles of the Declaration of Helsinki. The participants were informed of the study characteristics, as well as its objectives, and agreed to take part voluntarily. Formal consent was obtained through a signed informed consent form. ## 3. Results With respect to the intake of macro- and micronutrients, the nutritional requirements for pregnant women vary depending on whether they are in the first or second half of pregnancy. For this reason, the data referring to nutrient intake are shown distributed according to this parameter. Table 2 shows the median and interquartile range values for the intake of energy, macronutrients, fiber, and cholesterol. The caloric and lipid profiles are found in Table 3. With respect to the caloric profile for the total sample, proteins provide 16.29% of the energy consumed, 45.46% corresponds to carbohydrates (18.6% in the form of simple sugar), and the remaining 38.36% is provided by lipids. In terms of the lipid profile, the energy intake from saturated fats (SFA) for the total sample is 11.53%, monounsaturated fats (MFA) provide 18.26%, and polyunsaturated fats (PFA) 4.93%. There are significant differences between the two gestation periods for lipids (p = 0.006) in the case of the caloric profile, and for MFA (p = 0.010) and PFA (p = 0.018) in the lipid profile. Table 4 shows how the caloric profile is distributed within the limits established by the nutritional targets for the Spanish population [39]. In this regard, more than half of the participants in both periods have a higher protein consumption (for 54.5% and 57.8%, in the first and second half of pregnancy, respectively, proteins contribute more than 15% of the TCV). Likewise, for 71% and 65.8%, fats contribute more than 35% of the TCV.
With respect to carbohydrates, only 16.6% and 21.1% of the pregnant women in the first and second gestation periods, respectively, follow the recommendations (between 50% and 55%). In relation to added sugars, it should be highlighted that 87.6% and 91.9% of pregnant women in the first and second gestation periods, respectively, obtain more than the recommended 10% of energy from sugars. As regards the lipid profile, over half of the participants in both gestation groups consume more than 10% of energy as SFA (Table 5). Table 6 sets out the consumption of micronutrients (minerals and vitamins) in both gestation periods. It should be stressed that mineral intake does not meet the recommendations for calcium, iron, zinc, magnesium, and potassium. As regards the vitamins, the recommended intake is not met for FA (folic acid) and vitamins A, D, and E. The results of the regression models with the caloric profile as the dependent variable and the sociodemographic variables as independent variables are shown in Table 7. Carbohydrate intake is inversely related to income (β = −0.144, p ≤ 0.005), so lower incomes are associated with greater carbohydrate consumption. Likewise, protein intake is linked to marital status (β = −0.114, p ≤ 0.005) and religion (β = 0.110, p ≤ 0.005), since pregnant women with a partner and those who are Muslim consume the least protein. Finally, lipid intake appears conditional upon age (β = 0.109, p ≤ 0.005), with the oldest women consuming the largest amount of this macronutrient. Table 8 shows the results of the regression with the lipid profile as the dependent variable and the sociodemographic variables as independent variables. As regards the lipid profile, a positive association is only observed between age and MFA consumption (β = 0.161, p ≤ 0.01); i.e., the youngest pregnant women consumed the least MFA.
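A small helper can make the target comparison above explicit. The thresholds (protein above 15% of TCV, fat above 35%, carbohydrates within 50–55%, added sugars above 10%) are taken from the text; the function name and the example profile are hypothetical.

```python
# Check one participant's caloric profile (% of TCV) against the nutritional
# targets cited for the Spanish population: protein up to 15% of TCV, fat up
# to 35%, carbohydrates 50-55%, added sugars below 10%.
def check_targets(protein_pct, fat_pct, cho_pct, sugar_pct):
    return {
        "protein_exceeds_15": protein_pct > 15.0,
        "fat_exceeds_35": fat_pct > 35.0,
        "cho_within_50_55": 50.0 <= cho_pct <= 55.0,
        "sugar_exceeds_10": sugar_pct > 10.0,
    }

# Illustrative profile close to the sample means reported above.
flags = check_targets(protein_pct=16.3, fat_pct=38.4, cho_pct=45.5, sugar_pct=18.6)
print(flags)
```

Applied per participant, such flags yield exactly the compliance percentages reported in Tables 4 and 5.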
On the other hand, simple sugars are related to education (β = 0.106, p ≤ 0.005): the women with the lowest education consume the highest amount of simple sugars. ## 4. Discussion An imbalanced caloric and lipid profile is observed in this research. Over half of the participants show an intake exceeding the protein recommendations, as also occurs in other studies [41,42]. Carbohydrate intake was below the recommendations and total fat intake exceeded the dietary references, coinciding with other studies [29,43,44,45]. Although these studies did not differentiate between the types of fats consumed in the diet, it is known that ω-3 fatty acids during pregnancy improve infant cognitive development [46] and prevent allergic diseases [47]. However, the fact that total fat intake exceeds the recommendations may contribute to an unhealthy increase in maternal weight, which is associated with a higher risk of preeclampsia, gestational diabetes mellitus, macrosomia, congenital anomalies, newborns with low birth weight, and maternal mortality [48]. The group studied ingests an average of 1891.35 ± 529.18 kcal/day, practically coinciding with Izquierdo Guerrero [40], who reported an average of 1984.75 ± 579.84 kcal/day. The average energy intake of pregnant women is 1767 kcal/day during the first gestational period and 1898 kcal/day in the second; therefore, pregnant women do not reach the energy recommendations [39]. Around 90% of pregnant women eat more sugar than they should, the high consumption of commercial juices, pastries, sweets, and ice cream being responsible for this; our data are similar to the ANIBES study [49]. A higher consumption of sugars and fatty acids in pregnancy is associated with high adiposity in the offspring [50,51,52].
With respect to sugars, the WHO [53] recommends decreasing consumption to below 10% of total energy intake, since excess sugar causes weight gain and tooth decay; a consumption below 5% would give rise to additional health benefits. Furthermore, the participants' lipid profile shows a high energy intake from SFA, as occurs in other studies with similar groups [41,54,55]. MFA and PFA, however, do not reach the recommended consumption percentage. Likewise, the study by Ortega Anta et al. [56] highlights the low PFA consumption, recommending an increase in the consumption of fish and/or food enriched with PFA to achieve health benefits. According to the FEN (Spanish Nutrition Foundation) [57], women of fertile age should take care of their diet, not excluding any essential nutrients, so that when they become pregnant they do not face additional nutritional risks. ω-3 polyunsaturated fatty acids should be provided daily in the diet through fish or nuts, among other foods that contain them. An intake of 22–25 g of fiber per day is recommended in women. Unfortunately, in Europe, these recommendations are not reached, and very few countries offer guidance on the food sources of fiber needed to achieve a suitable intake [58]. Some countries, such as the Scandinavian countries, recommend a higher intake of wholegrains, approximately 75 g per day [59]. Fiber consumption in this study, at 12–15 g per day, is considerably below the recommendations and far from the reference data. Furthermore, Carbajal-Azcona [60] advises an intake of around 35 g/day of fiber and moderation of sugar intake in his general nutritional advice for pregnant women. A meta-analysis of dietary data from several developed countries reported that pregnant women have difficulties in following the national dietary guidelines for macro- and micronutrients [61].
The same occurs in the research presented here: the participants do not meet the intake recommendations for the minerals calcium, iron, magnesium, zinc, and potassium, or for vitamins A, D, and E and FA, coinciding with Kocyłowski et al. [62] and with Rodríguez-Bernal et al. [29], where an insufficient intake of FA, iron, and vitamin E was shown. Likewise, these results coincide with other studies that verified a lack of folates, vitamin D, calcium, iron, iodine, zinc, and B-group vitamins [63,64]. Adequate iron intake during pregnancy may reduce the risk of anemia, newborns with low weight, and premature births [65,66]. It is also important to maintain an adequate calcium intake, since it helps reduce the risk of preeclampsia [67]. The Expect I study, based on calcium intake in the diet and the use of supplements in pregnant women in the Netherlands, concluded that 42% of pregnant women had an inadequate calcium intake and that supplements are frequently used, but most do not contain sufficient quantities to remedy this inadequate intake [68]. Likewise, calcium consumption in this study is fairly insufficient, falling short by around 400–500 mg daily. FIGO highlights the importance of a healthy and varied diet, with supplements or fortified foods when necessary; it promotes the adoption of healthy eating habits before pregnancy, and it recognizes and provides adequate intervention for micronutrient deficiencies [69]. Various studies performed with Spanish pregnant women have demonstrated that the diet followed by pregnant women is not totally adequate. Thus, Izquierdo-Guerrero [41], in research with pregnant women from Madrid, found a high protein and fat consumption, especially saturated fat, and a deficiency in micronutrients, which did not meet the recommended intakes (RI). On the other hand, Jardí et al.
[32] determined that the consumption of red and processed meat and of cakes and pastries exceeded recommendations, whilst the consumption of healthy food decreased from the first trimester until after the postpartum period. Two studies, by Ruiz [70] and Izquierdo-Guerrero [41], asserted that older women have a greater consumption of sugars, lipids, and MFA, similar to the data obtained in our study. Likewise, Izquierdo-Guerrero [41] did not find significant associations between income and carbohydrate consumption, unlike our study, which found that women with the lowest income consumed more carbohydrates. On the other hand, according to Mohatar-Barba [71], Muslims usually eat less protein, which is similar to our results. The scarcity of studies linking sociocultural factors and diet quality in pregnant women is noteworthy. For this reason, we can consider this a novel study, as it is one of the first to link these variables. Among the study's limitations, we should highlight its cross-sectional design, since it does not allow cause–effect relationships to be established among the variables. This study considers the eating habits of pregnant women, but it does not collect information about diet quality before conception or during the postpartum period, which is considered important for correct monitoring. It should also be mentioned that the only factor not included in this study is tobacco consumption, which is usually considered when assessing lifestyles. As incidental non-probability sampling was performed, the sample does not cover a significant representation of all the cultures found in Melilla. The sample quality could be improved by increasing the number of participants and choosing representatives from different cities of Spain. On the other hand, the incorporation of nutrition education content into the educational system should be addressed, with the aim of fighting inadequate eating habits that may cause long-term health problems.
Finally, it should be mentioned that there was difficulty in finding studies on pregnant women that link diet quality with other sociodemographic factors. Looking to the future, it is recommended that the first nutritional interventions be implemented in the preconception period, since this influences the state of the mother's health, in addition to influencing the outcomes of the pregnancy [30,72,73]. Performing an educational nutritional intervention as a pilot project should also be considered, in order to assess its subsequent implementation and improve the proposal's design. Finally, another important aspect to consider is performing stratified probability sampling, which would achieve a more representative sample of Melilla's population. ## 5. Conclusions The results obtained in this study reveal that, in general, the caloric and lipid profiles of pregnant women in the city of Melilla do not meet the healthy recommendations established for the Spanish population. They consume too much protein and fat, with a high SFA intake, and do not reach the carbohydrate recommendations, while their diet contains twice the recommended amount of sugar. Likewise, they do not meet the recommendations for the intake of calcium, iron, magnesium, zinc, potassium, FA, and vitamins A, D, and E. Furthermore, certain factors may influence these intakes, such as religion, age, income, and marital status, whereas residency showed no association. Regarding religion and marital status, Muslims and pregnant women with partners show a lower protein consumption. On the other hand, women with lower income consumed more carbohydrates, and those with the lowest level of education consumed more simple sugars. Finally, older women consumed more lipids, specifically MFA.
# Optimizing Levilactobacillus brevis NPS-QW 145 Fermentation for Gamma-Aminobutyric Acid (GABA) Production in Soybean Sprout Yogurt-like Product ## Abstract Gamma-aminobutyric acid (GABA) is a non-protein amino acid with various physiological functions. Levilactobacillus brevis NPS-QW 145 strains, active in GABA catabolism and anabolism, can be used as a microbial platform for GABA production. Soybean sprouts can serve as a fermentation substrate for making functional products. This study demonstrated the benefits of using soybean sprouts as a medium for GABA production by *Levilactobacillus brevis* NPS-QW 145 when monosodium glutamate (MSG) is the substrate. With this method, a GABA yield of up to 2.302 g L−1 was obtained with a soybean germination time of one day and 48 h of bacterial fermentation using 10 g L−1 glucose, according to the response surface methodology. The research revealed a powerful technique for producing GABA by fermentation with *Levilactobacillus brevis* NPS-QW 145 in foods, which is expected to be widely used as a nutritional supplement for consumers. ## 1. Introduction Gamma-aminobutyric acid (GABA), a four-carbon, non-protein, water-soluble amino acid, is the main inhibitory neurotransmitter of the central nervous system [1,2,3,4,5]. It can have beneficial effects on the health of humans and other animals by reducing blood pressure, preventing chronic alcoholic diseases, inhibiting cancer cell proliferation, improving brain function, and promoting insulin secretion [6,7,8]. GABA also demonstrates the potential for lowering blood pressure in spontaneously hypertensive rats (SHR) and hypertensive humans [9,10]. Furthermore, a previous study reported the key role of GABA production in hepatocytes in the dysregulation of glucose homeostasis and the eating behavior associated with obesity [11,12,13]. There has been an increased demand for GABA due to its widespread use in various industries [14].
The concentration of GABA in plant tissues varies between 0.03 and 2.00 μmol g−1, increasing with hypoxia, hydraulic pressure, salt stress, temperature shock, germination, and other abiotic and biotic stresses [4]. Several microorganisms, including lactic acid bacteria (LAB) such as Levilactobacillus brevis, Lacticaseibacillus paracasei, and Enterococcus raffinosus, have recently been intensively investigated and used in GABA synthesis [15], because they are rich in glutamate decarboxylase and can synthesize GABA. Plant seed germination is a physiological process that stimulates endogenous enzyme activity and alters biochemical processes [8,16]. According to recent research, soybean sprouts can be utilized as an alternative means of strengthening the nutritional quality of phytochemical content, particularly GABA [7]. Germination of soybean for human consumption reduces the content of anti-nutritional elements while increasing the levels of minerals and phytochemicals such as vitamin E and isoflavone aglycone derivatives [17,18]. In particular, during soybean germination, various free amino acids are produced through protein degradation, providing a natural substrate for GABA synthesis [17]. This study aims to use response surface optimization to investigate the effect of soybean germination treatment and lactic acid bacteria fermentation on the level of GABA in soy milk. The study's results will provide a favorable theoretical basis for producing products with higher nutritional value. ## 2.1. Materials and Strain Organic soybeans were purchased from a local supplier. Analytical grade chemical reagents utilized in this work were purchased from Sigma-Aldrich Corp. (St. Louis, MO, USA). Levilactobacillus brevis NPS-QW 145 was obtained from BD Company (Franklin Lakes, NJ, USA). Six carbon sources, including glucose, lactose, mannose, galactose, amylopectin, and fructose, were purchased from Sigma-Aldrich Corp. (St. Louis, MO, USA).
Difco™ Lactobacilli MRS broth and monosodium glutamate (MSG) were purchased from Difco (Sparks, MD, USA). All other reagents were of analytical grade. ## 2.2. Preparation of Soybean Sprout Milk The germination conditions used in this study were based on Luo's method [4]. Typically, 200 g of soybeans were selected, washed, and soaked in a 95% ethanol solution for 1 min to remove microorganisms from the surface of the soybean seeds. The beans were then washed with sterile water and placed in an incubator for germination. The germination status of the beans was observed daily, and the germination length was measured. The bean sprouts were taken out of the incubator on days 0, 1, 3, 4, and 5 to prepare soybean sprout milk. The sprouts were rinsed with clean water and mixed with water in a ratio of 1:2 (soybean sprout:water) before putting the mixture into a grinder for 5 min of pulp grinding. The mixture was then filtered, homogenized, and sterilized in a water bath at 90 °C for 1 h. The sterilized mixture was left to cool at room temperature before fermentation. ## 2.3. Preparation of Fermented Yogurt-like Product The fermentation method followed the instructions of Xiao and Shah [19] with slight modifications. Firstly, the soybean sprout milk made from sprouts of different germination times was autoclaved and then inoculated with 3% Lb. brevis 145 (v/v), 5 g L−1 MSG, and one of six different carbon sources (glucose, lactose, mannose, galactose, amylopectin, and fructose) at different concentrations (0, 5, 10, 15, and 20 g L−1) and mixed well. Subsequently, the mixture was fermented in the incubator at 37 °C to observe the coagulation state and compare the resulting GABA concentrations. Soybean without germination treatment was used as the blank control. Yogurt prepared from the same quality of milk powder was also fermented with 3% Lb.
brevis 145 (v/v), 5 g L−1 MSG, and 10 g L−1 glucose, and then fermented at 37 °C for 48 h. The GABA content in soybeans with and without germination treatment was compared to explore the effect of germination on the GABA content of the soybean. ## 2.4.1. Single-Factor Experiments Levilactobacillus brevis NPS-QW 145 was used as the fermentation strain in the single-factor experiments. The following factors were examined for their influence on the GABA content of the fermented soybean sprouts: type of carbon source (glucose, lactose, mannose, galactose, amylopectin, and fructose), germination time (0, 1, 3, 4, and 5 d), glucose concentration (0, 5, 10, 15, and 20 g L−1), and fermentation time (12, 24, 48, 72, and 96 h). The GABA concentration was determined using RP-HPLC (Shimadzu model LC-2010A, Shimadzu Corp., Kyoto, Japan). ## 2.4.2. Response Surface Methodology (RSM) RSM is typically used to investigate optimal experimental conditions, since it is a reliable and useful statistical methodology. The experimental method was partially modified from that of Zhang et al. [14]. Based on the results of the single-factor experiments, glucose concentration, fermentation time, and germination days were selected for the RSM experiment, based on a Box-Behnken design (BBD), with the GABA level treated as the response value. Table 1 shows the three factors and three levels of the research design. ## 2.5.1. Protein and Peptide Removal from Soybean Sprout Yogurt-like Product GABA levels were determined according to Wu and Shah's method [2]. Reversed-phase HPLC (RP-HPLC, Shimadzu model LC-2010A, Shimadzu Corp., Kyoto, Japan) was employed to detect the GABA concentration in the fermented soybean sprout yogurt-like product.
First, to remove the protein from the soybean sprout milk, a 1 mL aliquot of the fermented soymilk sample was diluted five times with sterile H2O, and 250 μL of zinc acetate and ferrous cyanide solutions were added and mixed thoroughly. After standing for 1 h, the samples were centrifuged at 5000× g at 25 °C to completely precipitate proteins and peptides. GABA analysis was performed after the samples had been stored at 4 °C. ## 2.5.2. Amino Acid Derivatization The supernatant obtained, containing GABA, was derivatized: 200 μL of supernatant was mixed with 200 μL of acetonitrile, 200 μL of NaHCO3 (pH 9.8), 200 μL of H2O, and 100 μL of 40 g L−1 dansyl chloride, and incubated at 60 °C in the dark for 1 h. After derivatization, 100 μL of 20 μL mL−1 acetic acid was added to stop the reaction. Subsequently, the sample was centrifuged at 12,000× g at 25 °C for 5 min. The supernatant was then passed through a 0.22 μm membrane filter and stored in a brown vial. The GABA concentration of the derivatized sample was analyzed using RP-HPLC, as previously described [2]. The retention time for GABA was approximately 20 min. The standard curve of GABA presented in Figure 1 was prepared with 0.01, 0.05, 0.07, 0.1, 0.25, 0.5, 0.7, and 1.75 g L−1 concentrations of GABA standard solution. The peak area was highly correlated with the GABA concentration (R2 = 0.9992), and the relationship between them satisfied the regression equation y = 4 × 10⁶x + 70,120. The peak areas obtained for the samples could thus be substituted into this equation to calculate the GABA concentration. RP-HPLC was used to separate and quantify dansyl-GABA and dansyl-glutamic acid using a Kromasil 5 μm 100 Å C18 column (250 mm × 4.6 mm; Phenomenex, Torrance, CA, USA). ## 2.6. Determination of pH and Viable Cell Counts in Fermented Soybean Sprout Yogurt-like Product This method combined the approaches of Chan and Wu [20,21].
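The quantification step can be sketched by inverting the fitted standard curve y = 4 × 10⁶x + 70,120 (y = peak area, x = GABA concentration in g L−1). The five-fold dilution correction included here is an assumption based on the sample preparation described in Section 2.5.1.

```python
# Invert the GABA standard curve to convert an HPLC peak area into a
# concentration, then correct for the sample dilution applied before analysis.
SLOPE = 4e6        # peak area per (g/L) of GABA, from the standard curve
INTERCEPT = 70_120

def gaba_concentration(peak_area: float, dilution: float = 5.0) -> float:
    """GABA concentration (g/L) in the original sample.

    dilution = 5.0 assumes the five-fold dilution described in the text.
    """
    diluted = (peak_area - INTERCEPT) / SLOPE  # g/L in the injected sample
    return diluted * dilution

# A peak area of 2,070,120 corresponds to 0.5 g/L in the diluted sample,
# i.e. 2.5 g/L in the original fermented product.
print(gaba_concentration(2_070_120))
```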
The pH values of the fermented soybean sprout yogurt were measured using a pH meter (250A Orion Portable pH Meter, USA). To measure the viable cell number, 1 mL of the fermented soybean sprout yogurt-like product was dissolved in 9 mL of sterilized normal saline. Subsequently, 1 mL of the uniform solution was plated with Difco™ Lactobacilli MRS broth and incubated at 37 °C for 48 h. The average number of colonies in the Petri dish was multiplied by the dilution factor. Generally, plates with 30–300 CFU were chosen for counting, and colony numbers were expressed in CFU mL−1. ## 2.7. Protein Content of Fermented Soybean Sprout Yogurt-like Product The fermented soybean sprout yogurt-like product was compared to those made from commercial soybean powder and milk powder. The protein content was determined using the micro-Kjeldahl method. A sample of 0.3 g was accurately weighed and placed in a Kjeldahl tube with a catalyst tablet and 10 mL of concentrated sulfuric acid. The weighed sample was digested in a digestion furnace at 370 °C for 50 min until the solution turned light green. Next, 40 mL of distilled water was added to the digested sample for cooling, which was then put into the micro-Kjeldahl nitrogen determinator for automatic titration. Finally, manual titration was conducted using a 250 mL conical flask with 40 mL of 4% boric acid solution and five drops of the indicator. Three parallel tests were performed for each group of samples. The sample nitrogen content was calculated using the following formula and then converted to crude protein content: %N = ((1.4 × V)/(1000 × W)) × 100 (1), where V = volume (mL) of 0.1 N HCl used in the titration and W = sample weight (g). The value of 1.4 derives from the fact that 1.0 mL of 0.1 N titrant corresponds to 1.4 mg of nitrogen. ## 2.8.
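The micro-Kjeldahl calculation can be sketched directly from Formula (1). The nitrogen-to-protein conversion factor of 6.25 used below is the conventional value and an assumption here, since the text does not state which factor was used.

```python
# Micro-Kjeldahl nitrogen and crude protein calculation, per Formula (1):
# %N = ((1.4 * V) / (1000 * W)) * 100, with V in mL of 0.1 N HCl and W in g.
def percent_nitrogen(v_ml: float, w_g: float) -> float:
    """%N from titration volume V (mL of 0.1 N HCl) and sample weight W (g).

    1.0 mL of 0.1 N titrant corresponds to 1.4 mg of nitrogen.
    """
    return (1.4 * v_ml) / (1000.0 * w_g) * 100.0

def crude_protein(v_ml: float, w_g: float, factor: float = 6.25) -> float:
    """Crude protein (%), using the conventional 6.25 factor (assumed)."""
    return percent_nitrogen(v_ml, w_g) * factor

# Illustrative titration: 3.0 mL of 0.1 N HCl for a 0.3 g sample.
print(percent_nitrogen(3.0, 0.3), crude_protein(3.0, 0.3))
```

For the illustrative values, 3.0 mL of titrant on a 0.3 g sample gives 1.4 %N, i.e. 8.75% crude protein with the 6.25 factor.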
Texture Analysis To compare the effects of different soybean germination days, fermentation times, and carbon source additions on the texture of the yogurt-like products, a texture analysis was performed on samples prepared under different conditions. The method for textural characterization of the fermented bean sprout yogurt-like product was modified from Giri's method [22]. Before measurement, products from the different preparation processes were stored at 4 °C for 12 h and restored to room temperature, and about 9 g of each sample was weighed. The texture analysis was performed using a Texture Analyzer TA.XT2i (Stable Micro Systems, Godalming, Surrey, UK) equipped with a 25 kg load cell and calibrated with a standard dead weight of 5 kg before use. An HDP/SR-TTC probe was utilized for the determination. Texture Expert version 1.20 software (Stable Micro Systems) was used to measure firmness, stickiness, work of shear, and work of adhesion. Each sample was measured three times, and the mean value was recorded. The specific measurement parameters were test speed: 3.0 mm s−1, measured speed: 10 mm s−1, test distance: 23 mm, trigger force: g, and data acquisition rate: 200 PPS. ## 2.9. Sensory Analysis This method was slightly modified from Meilgaard's approach [23]. Briefly, 50 trained panelists were invited to evaluate the appearance, odor, acidity, thickness, fluidity, taste, and overall acceptance of the fermented yogurt-like product on a 9-point hedonic scale ranging from 1 ("extremely dislike") to 9 ("extremely like"). ## 2.10. Statistical Analysis Figures were created using Microsoft Excel 2010, and IBM SPSS Statistics 25 was used to analyze significant differences (p ≤ 0.05 indicated a significant difference, and p ≤ 0.01 a highly significant difference).
The response surface was designed, optimized, and analyzed using Design Expert 10.0.7. ## 3.1. The Effect of Various Conditions on GABA Production by Lb. brevis 145 in Soybean Sprout Yogurt-like Product Legumes primarily metabolize GABA through a short pathway known as the GABA shunt, which converts glutamate into succinic acid. This pathway synthesizes GABA from glutamate by glutamate decarboxylase (GAD, EC 4.1.1.15). GABA is then converted to succinic semialdehyde (SSA) by GABA aminotransferase (GABA-T, EC 2.6.1.19). The last step of the shunt pathway converts SSA to succinic acid using succinic semialdehyde dehydrogenase (SSADH, EC 1.2.1.16) [24,25]. In the present study, after soybean seed germination, protein was transformed into glutamate and polyamines, which provided sufficient precursor substances for GABA enrichment. During soybean germination, the soluble sugar and dry matter contents decreased with the extension of germination time, whereas the contents of reducing sugar, soluble protein, free amino acids, and GABA increased. Figure 2A shows that a significant increase (p ≤ 0.05) in GABA content was detected during soybean germination. GABA content in soybeans initially increased and then decreased with increasing germination time. When the germination time was one day, the GABA content was highest (0.025 g L−1). Compared to raw soybeans, the GABA content had increased significantly, by 1.61-fold, after one day of germination. This outcome is consistent with the findings of Vann's study [8], which found that germination significantly increased the GABA content in soybean sprouts. Meanwhile, a previous study [4] reported that the GABA content in germinated soybeans peaked on day 5, which conflicts with the present study. As mentioned above, increased GAD activity could be responsible for the higher GABA content.
The most likely explanation for this difference is that GAD activity during germination was also influenced by germination temperature and germination approach [26,27]. Figure 2B shows that GABA-producing bacteria have different preferences for sugars, affecting their growth and GABA production. Compared to lactose, mannose, galactose, amylopectin, and fructose, glucose was significantly better at improving GABA levels (p ≤ 0.05). This result is consistent with the findings of a previous report by Xiao and Shah [19], which suggested that after fermentation for 24 h, glucose was the main carbon source consumed by Lb. brevis 145. Furthermore, with increasing fermentation time, the GABA content in the fermentation broth increased initially and then decreased (Figure 2C). The content increased sharply in the first 48 h; after 48 h of continuous culture, the GABA content in the fermentation broth decreased significantly. At 48 h of fermentation, the GABA content reached its maximum of 1.867 g L−1. A possible reason for this was the consumption of MSG and nutrients in the fermentation broth as fermentation time was extended, and the subsequent cell senescence with decreased GABA content. The effect of different glucose concentrations on GABA production in the soybean sprout yogurt-like product was also investigated. Glucose, as the main carbon source of microorganisms, provides the energy required for the life activities of the bacteria and constitutes the material basis of bacterial cells and their metabolites [28]. Figure 2D shows that with increasing glucose addition, the GABA content in the fermented bean sprout yogurt-like product also increased initially and then decreased. Briefly, when the glucose addition was 10 g L−1, the maximum GABA content was 2.21 g L−1.
However, when the glucose addition was increased further, the GABA content in the bean sprout yogurt-like product showed a clear decreasing trend. A possible explanation is that when the sugar content in the fermentation medium was too high, cell metabolic activity produced organic acids, resulting in decreased pH and cell aging, whereas when the sugar content was in a low range, the bacteria were less affected by changes in sugar metabolites [28]. ## 3.2.1. RSM Results Based on the single-variable optimization, germination days, fermentation time, and glucose concentration were chosen as the key variables for the response surface analysis in order to model the fermentation process (Table 2). Based on the Box-Behnken experimental design results, a quadratic multiple regression fit was performed and a multiple quadratic response surface regression model was established. The obtained quadratic regression equation was as follows:

$$Y = 2.35 + 5.9375\times10^{-3}A + 4.1875\times10^{-3}B + 5.125\times10^{-3}C - 1.75\times10^{-3}AB + 3.75\times10^{-4}AC + 3.75\times10^{-4}BC - 0.060135A^{2} - 0.046135B^{2} - 0.04801C^{2} \quad (2)$$

where Y represents the GABA concentration, A represents the germination days, B represents the fermentation time, and C represents the glucose concentration. Table 3 illustrates the results of the ANOVA. The regression model F-test was highly significant ($p \leq 0.01$), and the R-squared was $95.75\%$, indicating that the model explains $95.75\%$ of the variation in the response value. The lack-of-fit p-value was 0.676 (greater than 0.05), i.e., non-significant. The model therefore fit the data well with a small experimental error, and the model and equation could be employed to analyze and predict the amount of GABA obtained. ## 3.2.2. RSM Analysis of the Best-Fermented Parameters Following the regression equation fitted by the RSM, the response surface graphs and contours of the model were drawn.
The response surface contour maps directly reflect the influence of the various factors on the response value, making it possible to identify the best process parameters and the interactions between them. The center point of the smallest ellipse in a contour map corresponds to the highest point of the response surface, and the shape of the contours reflects the intensity and significance of the interaction between the two factors. The contour lines in Figure 3 were elliptical, corroborating that the interactions among fermentation time, glucose concentration, and germination time were significant. Figure 3 presents the three-dimensional surface diagrams of the pairwise interactions of the independent variables on GABA concentration, created with Design-Expert 10.0.7 software. The 3D response surface diagrams show that germination days, glucose concentration, and fermentation time interacted strongly, and that their effects were all statistically significant. Analysis of the regression equation showed that the fitted surface has a maximum within the experimental region, and the technological conditions producing this maximum could be found through response surface analysis. Thus, the optimal technological conditions for the enrichment of GABA in the fermented soybean sprout yogurt-like product were: soybean germination for 0.798 days, fermentation for 45.490 h, and a glucose concentration of 9.691 g L−1. Under these conditions, the predicted GABA mass concentration was 2.287 g L−1. To verify the reliability of the regression equation, conditions rounded to practical values were adopted: soybeans germinated for 24 h, fermented for 48 h, with a glucose concentration of 10 g L−1. The GABA level obtained in the verification test was 2.302 g L−1, a relative deviation of $0.67\%$ from the theoretical prediction.
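As a numerical illustration, the fitted model in Equation (2) can be checked by locating its stationary point directly. The sketch below assumes the factors A, B, and C are in coded Box-Behnken units (roughly −1 to +1), which the text does not state explicitly; it illustrates the method rather than reproducing the reported natural-unit optimum.

```python
# Sketch: locate the stationary point of the fitted quadratic model
# (Equation (2)). Assumption (not stated in the text): A, B, C are in
# coded Box-Behnken units, so the relevant region is [-1, 1] per factor.

# Coefficients as printed in Equation (2)
b = (5.9375e-3, 4.1875e-3, 5.125e-3)                 # linear terms for A, B, C
d = {"AB": -1.75e-3, "AC": 3.75e-4, "BC": 3.75e-4}   # interaction terms
q = (-0.060135, -0.046135, -0.04801)                 # quadratic terms
c0 = 2.35

def predict(A, B, C):
    """GABA concentration (g/L) predicted by Equation (2)."""
    return (c0 + b[0]*A + b[1]*B + b[2]*C
            + d["AB"]*A*B + d["AC"]*A*C + d["BC"]*B*C
            + q[0]*A*A + q[1]*B*B + q[2]*C*C)

def solve3(M, v):
    """Solve a 3x3 linear system M x = v by Cramer's rule."""
    def det(m):
        return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
              - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
              + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    D = det(M)
    xs = []
    for i in range(3):
        Mi = [row[:] for row in M]
        for r in range(3):
            Mi[r][i] = v[r]
        xs.append(det(Mi) / D)
    return xs

# Setting the gradient of Equation (2) to zero gives a linear system:
H = [[-2*q[0], -d["AB"], -d["AC"]],
     [-d["AB"], -2*q[1], -d["BC"]],
     [-d["AC"], -d["BC"], -2*q[2]]]
A_opt, B_opt, C_opt = solve3(H, list(b))
print(f"stationary point (coded units): A={A_opt:.3f}, B={B_opt:.3f}, C={C_opt:.3f}")
print(f"predicted GABA there: {predict(A_opt, B_opt, C_opt):.4f} g/L")
```

With the coefficients as printed, the stationary point lies near the center of the design (all coded values ≈ 0.05) with a predicted GABA concentration of about 2.35 g/L; since all quadratic coefficients are negative, this point is a maximum. The small difference from the reported prediction of 2.287 g/L suggests the transcribed coefficients are rounded, so the sketch should be read as illustrative only.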
Therefore, the optimal process conditions of the fermentation system obtained by response surface optimization were reliable. Furthermore, the fermented soybean sprout yogurt-like product obtained in the validation test had a uniform solidified state, a strong and pure fermentation flavor, and no off-odor. ## 3.3.1. GABA Concentration in GABA-Rich Yogurt Figure 4 shows the results for soybeans germinated for one day and then fermented with Lb. brevis 145 for 48 h. The GABA content reached a maximum of 2.302 g L−1, which was 1.56 and 3.5 times the GABA content of the yogurt-like products fermented from soybean powder and milk powder, respectively, implying that soybean germination combined with lactic acid bacteria fermentation can significantly increase the GABA content in yogurt, thus producing a functional yogurt-like product rich in GABA. ## 3.3.2. pH and Cell Viability in Fermented Soybean Sprout Yogurt-like Product According to the Chinese national standard GB 4789.35 for lactic acid bacteria in viable products, the lactic acid bacteria count must be higher than 1 × 106 CFU mL−1. Figure 5 illustrates that the viable count after 72 h of fermentation was still as high as 8 × 106 CFU mL−1, meeting the standard requirement for live-bacteria plant-based yogurt. Moreover, as fermentation time increased, acidity rose due to the production of organic acids in the medium, resulting in a decrease in pH. After fermentation, the pH of the soybean sprout yogurt-like product accordingly showed a downward trend, as illustrated in Figure 5. Finally, the pH of the fermented bean sprout yogurt-like product stabilized at about 4.4, which meets the Chinese national standard requirement (GB 5009.237) for the pH of fermented yogurt products (pH ≤ 4.5). ## 3.3.3.
Texture Characteristic and Protein Content in GABA-Rich Yogurt-like Product Table 4 illustrates the texture characteristics and protein content of the fermented yogurt-like product from soybean sprouts, the fermented yogurt-like product from soy flour, and the fermented yogurt from milk. The texture measurements were made on samples with intact gel structures after 48 h of fermentation; their work of shear, stickiness, work of adhesion, and firmness were evaluated. Firmness, the force required to achieve a certain deformation, is a regularly examined criterion when defining the texture of set-type cultured dairy products; it is the peak force on the first compression cycle [29,30]. The firmness of the fermented soybean sprout yogurt-like product was significantly ($p \leq 0.05$) higher than that of the other two samples. The increased firmness could be due to a high water-binding capacity [31,32]. The quantity of energy required to perform the shear operation is known as the work of shear; it evaluates the resistance of the sample throughout the penetration of the probe. In the current investigation, the work of shear of the fermented soybean sprout yogurt-like product and the fermented soybean flour yogurt-like product was significantly ($p \leq 0.05$) higher than that of the fermented milk yogurt. However, no significant ($p > 0.05$) difference was observed between the fermented soybean sprout yogurt-like product and the fermented soybean flour yogurt-like product. Stickiness is an essential sensory quality of semisolid food ingredients, defined as a sensation sensed by the tongue and palate [33,34]. Negative values represent stickiness, while positive values represent the product’s hardness.
In the present investigation, no significant ($p > 0.05$) difference in stickiness was detected between the fermented soybean sprout yogurt-like product and the fermented soybean flour yogurt-like product, both of which showed higher stickiness than the fermented milk yogurt. To characterize the work of adhesion, the area under the negative peak during penetration was measured. It can also be defined as the work required to overcome the attractive force between the product surface and the probe surface [22]. In the current investigation, as Table 4 illustrates, there was no significant ($p > 0.05$) difference in the work of adhesion between the fermented soybean sprout yogurt-like product and the fermented soy flour yogurt-like product, both of which were marginally lower than the fermented milk yogurt. Furthermore, according to the Chinese national standard requirement (GB 5009.5), the protein content in soybean products must not be less than $2.5\%$. Table 4 shows that the protein content of all three products reached the national standard. These results are consistent with those of Niamah’s study [35]. Notably, the protein level of the fermented soybean sprout yogurt-like product was 1.7 times the national standard. ## 3.3.4. Sensory Evaluation of Yogurt-like Product Rich in GABA Table 5 shows the scores for the sensory characteristics of the fermented samples. The GABA-rich fermented sprout yogurt-like product had a milky, full-bodied aroma. There were no significant differences ($p > 0.05$) in appearance, acidity, fluidity, thickness, or overall acceptance between the bean sprout yogurt-like product and the commercially available yogurt, suggesting that the fermented bean sprout yogurt-like product has good prospects for market and consumer acceptance.
However, there was a significant difference ($p \leq 0.05$) in odor and taste between the bean sprout yogurt-like product and the commercially available yogurt. Future process optimization will focus on improving these two indicators. ## 4. Conclusions This study investigated the effect of lactic acid bacteria fermentation of germinated soybeans on the GABA content of yogurt. In soybeans, GABA content increased significantly during germination and reached its peak after one day. The highest GABA production (2.302 g L−1) in the fermented soybean sprout yogurt-like product was obtained when Lb. brevis 145 was used for 48 h of fermentation with 10 g L−1 glucose as the sole added carbon source. The use of germinated soybeans had a significantly positive effect on GABA enrichment. At the same time, the fermented soybean sprout yogurt-like product with high GABA content met the requirements of the Chinese national standards for yogurt in terms of acidity, protein content, and the number of live bacteria, and it had a better texture than the commercially available yogurt. This provides a basis for producing innovative GABA-enriched yogurt.
# 3-Phenyllactic Acid and Polyphenols Are Substances Enhancing the Antibacterial Effect of Methylglyoxal in Manuka Honey ## Abstract Manuka honey is known for its unique antibacterial activity, which is due to methylglyoxal (MGO). After establishing a suitable assay for measuring the bacteriostatic effect in liquid culture, with time-dependent, continuous measurement of the optical density, we were able to show that honeys differ in their growth-retarding effect on *Bacillus subtilis* despite equal MGO contents, indicating the presence of potentially synergistic compounds. In model studies using artificial honey with varying amounts of MGO and 3-phenyllactic acid (3-PLA), it was shown that 3-PLA in concentrations above 500 mg/kg enhances the bacteriostatic effect of model honeys containing 250 mg/kg MGO or more. The effect was shown to correlate with the contents of 3-PLA and polyphenols in commercial manuka honey samples. Additionally, as yet unknown substances further enhance the antibacterial effect of MGO in manuka honey. The results contribute to the understanding of the antibacterial effect of MGO in honey. ## 1. Introduction Honey is a food item known for its antibacterial activity, which is due to physical factors such as high osmolarity and low pH value, as well as defined antibacterial compounds such as hydrogen peroxide, produced by glucose oxidase, and antibacterial peptides from the honeybee such as “bee-defensin” [1,2,3]. Compared with “conventional” honeys, such as linden or rapeseed honey, the pronounced antibacterial activity of manuka honey (*Leptospermum scoparium*) is due to high concentrations of methylglyoxal (MGO), which is formed by dehydration of dihydroxyacetone in the nectar during maturation [4,5]. While MGO is present in conventional honeys in amounts up to 24.1 mg/kg [6], the MGO levels in manuka honey reach up to 1541 mg/kg, depending on the location of the bee hive [7,8,9].
On the other hand, manuka honey is classified as a “low-peroxide” honey due to its low glucose oxidase activity [10]. Besides its use as a food item, honey, and especially manuka honey, is used as a non-adherent dressing in medical wound management, where the properties mentioned help to stimulate tissue regeneration and to keep the wound sterile [11,12]. A standard method for determining the antibacterial activity of (manuka) honey is the agar diffusion assay, in which honey solution is placed into a cavity of an agar plate inoculated with a culture of Staphylococcus aureus. After 24 h of incubation, the antibacterial effect of the honey solution can be estimated from the inhibition zones around the samples and by comparing the effect caused by the honey samples with standard solutions of an antibacterial agent, e.g., phenol [13,14]. This technique has the drawbacks that the agar plates cannot be evaluated precisely and that no statements about bacteriostatic or bactericidal effects are possible. For the classification of the antibacterial activity of manuka honey, Swift et al. [2014] established a growth assay in liquid culture, in which it was shown that MGO has a bacteriostatic effect on S. aureus [15]. Beyond that, quantitative evaluations of the antibacterial effect of MGO in honey on specific bacteria are rare in the literature, as are evaluations that account for the variety of bioactive substances in honey besides MGO, such as hydrogen peroxide [1]. Additionally, it has already been shown that MGO and manuka honey act synergistically with antibacterial and antiviral substances. For example, the addition of linezolid, an oxazolidinone-based antibiotic inhibiting the protein biosynthesis of Gram-positive bacteria, reduced the minimal inhibitory concentration (MIC) of MGO against S. aureus in a checkerboard broth microdilution assay by a factor of four at concentrations below its own MIC [16].
Similar effects were observed for S. aureus biofilms when adding rifampicin to manuka honey, and for influenza viruses (H1N1) in Madin–Darby canine kidney cells when adding the antiviral agents zanamivir or oseltamivir to manuka honey [17,18]. On the other hand, little can be found in the literature on whether the antibacterial effect of MGO itself can be enhanced by naturally occurring compounds present in manuka honey. It has been shown that the growth delays of S. aureus and *Pseudomonas aeruginosa* were greater when treated with manuka honey at concentrations above 250 mg/kg together with α-cyclodextrin, compared with the honey solutions without α-cyclodextrin [15]. Furthermore, the only substances native to (manuka) honey discussed as having an antibacterial effect themselves are syringic acid and 3,4,5-trimethoxybenzoic acid [19]. Manuka and non-manuka honeys also contain polyphenolic compounds in concentrations up to 2967 mg gallic acid equivalents (GAE)/kg, which provide antioxidant activity in the honey [20,21]. Synthetic derivatives of gallic acid show antibacterial activity against E. coli, S. aureus and *Bacillus subtilis* [22]. So far, it is unknown whether syringic acid, 3,4,5-trimethoxybenzoic acid or naturally occurring polyphenolic compounds enhance the effect of MGO in manuka honey. Since manuka honey is also rich in other organic acids with structural similarities, such as 3-phenyllactic acid (3-PLA), 4-hydroxyphenyllactic acid and 2-methoxybenzoic acid, we hypothesized that these substances can influence the antibacterial effect of manuka honey. As 3-PLA is a marker substance for manuka honey, manuka honey has to contain at least 500 mg/kg 3-PLA by definition [23]. Due to the high 3-PLA contents of up to 1400 mg/kg in manuka honey, which are of the same order of magnitude as MGO, and its relevance as a marker substance for manuka honey, it was considered in this study [24].
Therefore, the aim of this study was to evaluate possible synergistic effects of MGO, 3-PLA and polyphenols on the antibacterial activity of manuka honey. ## 2.1. Honey Samples This study included four commercially available manuka honeys, all labeled for wound-healing treatment; two manuka honeys labeled MGO250+ (containing 270 mg/kg MGO) and MGO400+ (containing 444 mg/kg MGO) and a cornflower honey as references; and a manuka honey labeled MGO30+ (containing 72 mg/kg MGO) as a basis for spiking experiments. After purchase, all samples were stored at 4 °C until analysis. As a blank matrix and for dilution of the honey samples, artificial honey prepared following Deng et al. [2018] was used in the assay to obtain constant osmotic pressure [25]. In brief, 44 g of fructose, 37 g of glucose and 2 g of sucrose were dissolved in 17 g of water. The suspension was gently heated to 45 °C and stirred until fully dissolved. ## 2.2. Chemicals Methanol (HPLC grade) and acetonitrile (LC-MS grade) were purchased from VWR (Darmstadt, Germany). Methylglyoxal, ortho-phenylenediamine, 2-methylquinoxaline, gallic acid, 3-phenyllactic acid, forchlorfenuron, 1,3-dihydroxyacetone dimer and Folin–Ciocalteu reagent were obtained from Sigma Aldrich/Merck (Steinheim, Germany). Formic acid, acetic acid, fructose, glucose, sucrose and LB-broth were obtained from Carl Roth (Karlsruhe, Germany). Sodium carbonate, sodium dihydrogenphosphate and disodium hydrogenphosphate were purchased from Grüssing (Filsum, Germany). 3-Deoxyglucosone (3-DG) was synthesized according to Henle and Bachmann [1996] [26]. Double-distilled water (Bi 18E double distillation system, QCS, Maintal, Germany) was used for HPLC solvents. ## 2.3. Quantification of MGO in Manuka Honeys and in Model Solutions The quantification of MGO was performed according to Atrott et al. [2012] with slight modifications [5].
Briefly, 150 µL of ortho-phenylenediamine (OPD) solution ($1\%$ in phosphate-buffered saline (PBS)) was added to 650 µL of a $2.5\%$ solution of honey in 0.5 M PBS at pH 6.5, or to a mixture of 150 µL of model solution and 500 µL of PBS, respectively, for an overnight incubation at room temperature in the absence of light. After membrane filtration (0.45 µm), the samples were analyzed via HPLC-UV. The analyses were performed using a system consisting of a pump with an online degasser and a mixing chamber P 6.1 L, an autosampler AS 6.1 L, a column thermostat CT 2.1 and a diode array detector DAD 2.1 L, all from Knauer (Berlin, Germany). The separation of the quinoxalines was achieved on a column filled with Eurospher C-18 material (250 × 4.6 mm, 5 µm particle size, with integrated precolumn, Knauer, Berlin, Germany) as the stationary phase, with a mobile phase consisting of $0.075\%$ acetic acid as solvent A and a mixture of $20\%$ solvent A and $80\%$ methanol as solvent B. The gradient started with $40\%$ solvent B for 1 min, was raised to $100\%$ B within 20 min, was changed back to $40\%$ B in 4 min and was held there for an additional 5 min. The separation was performed at a flow rate of 0.9 mL/min and a temperature of 30 °C; 20 µL of sample was injected. Peaks were detected by measuring the UV absorbance at 312 nm. Quantification was achieved by external calibration with a commercial MGO solution, the MGO content of which was determined after derivatization with OPD by comparison with a 2-methylquinoxaline standard. ## 2.4. Extraction of Honey Proteins To extract the high-molecular-weight fraction of the honeys containing the honey protein, honey samples were prepared according to the protocol published by Hellwig et al. [2017], with slight modifications [27]. Approx. 5 g of honey was dissolved in 10 mL of water and transferred into a dialysis tube (MWCO 14 kDa, Sigma, Steinheim, Germany), followed by dialysis against water.
During the 2 days of dialysis, the water was changed twice a day. Afterwards, the retentates were freeze-dried and stored at −18 °C until sampling. About 20 mg of honey protein was obtained from each sample. ## 2.5. Quantification of 3-Phenyllactic Acid in Manuka Honeys To determine the content of 3-phenyllactic acid, the honey samples were prepared according to the protocol published by the NZ Ministry for Primary Industries [2017] with slight modifications [28]. Briefly, 1.5 g of honey was dissolved in 10 mL of a solution consisting of acetonitrile, formic acid and water (10:1:90 v/v). The samples were shaken for 20 min with an overhead shaker until complete dissolution. After centrifugation (3000× g, 10 min) and membrane filtration (0.45 µm), 50 µL of solution was mixed with 940 µL of extraction solution and 10 µL of internal standard solution (forchlorfenuron, 10 mg/L in $1\%$ formic acid in acetonitrile). This solution was injected into the HPLC-MS/MS system, consisting of a binary pump G1312A, an online degasser G1379B, an autosampler G1329A, a column thermostat G1316A and a mass spectrometer with electrospray ion source G6410A, all from Agilent Technologies (Santa Clara, CA, USA). Separation was achieved with a Kinetex 100 C18 column (2.1 × 50 mm, 1.7 µm, with precolumn, Phenomenex, Torrance, CA, USA) as the stationary phase and a mobile phase consisting of $0.1\%$ formic acid in water as solvent A and $0.1\%$ formic acid in acetonitrile as solvent B. The gradient started with $5\%$ solvent B for 0.75 min, was raised to $15\%$ B within 1.25 min and to $70\%$ B within 2 min, before a final increase to $98\%$ B within 2 min. This concentration was held for 2 min, changed back to $5\%$ B within 1 min and finally held for 7 min. The separation was performed at a flow rate of 0.2 mL/min and a temperature of 40 °C. In total, 5 µL of the sample was injected.
For quantification, MRM transitions (see Table 1) were recorded, and an external calibration with standard solutions of 3-phenyllactic acid (concentrations between 0.025 and 1 mg/L) was used. General working conditions of the mass spectrometer were 350 °C gas temperature, 11 L/min gas flow, 35 psi nebulizer pressure and 4000 V capillary voltage. ## 2.6. Estimation of the Polyphenol Content in Honey via Folin–Ciocalteu Method The determination of total polyphenols was carried out using the method of Singleton et al. [1999], with slight modifications [29]. In total, 20 µL of sample solution ($15\%$ w/v in water) or calibration solution (gallic acid dissolved in water, concentrations ranging between 5 and 200 mg/L) and 100 µL of 0.2 N Folin reagent solution (Sigma Aldrich, Steinheim, Germany) were pipetted into a cavity of a 96-well plate. After 5 min, 80 µL of a 75 g/L sodium carbonate solution was added. After a reaction time of 120 min, the absorbance of the resulting blue dye was measured at 760 nm against a blank containing water instead of sample or calibration solution. ## 2.7. Determination of the Bacteriostatic Effect of Manuka Honey Solutions against Bacillus subtilis The determination of the bacteriostatic effect of manuka honey solutions against B. subtilis W168 was performed according to Jenkins and Cooper [2012], with modifications [30]. First, an overnight culture was prepared by inoculating approx. 10 mL of liquid LB-broth with a colony-forming unit of the bacterial strain. The next day, the OD600 of a 1:10 (v/v) dilution of the culture in LB-broth was measured with a spectrophotometer UV-3100PC (VWR, Darmstadt, Germany). To evaluate the antibacterial effect, $30\%$ solutions of the honey samples or of artificial honey, respectively, were prepared with liquid LB-broth and sterile-filtered (0.2 µm).
To realize different amounts of methylglyoxal in the samples while maintaining the sugar concentration at the same level, honey solutions were diluted with artificial honey solution. This also ensured a constant osmotic pressure in each diluted sample. For the assay, 105 µL of the diluted or undiluted honey sample solution was pipetted into a cavity of a 96-well plate; 105 µL of the artificial honey solution was used as a blank. An aliquot of the 1:10 diluted overnight culture was placed into each cavity such that the resulting OD600 was calculated to be 0.05. Finally, LB-broth was added to a total volume of 210 µL, such that the assay solution contained $15\%$ (w/v) honey in total. To prove sterile conditions, 210 µL of liquid LB-broth was also tested as a blank. To obtain growth curves, the samples were incubated at 37 °C for 24 h while being shaken continuously, and the OD600 was measured every 5 min using a Biotek EPOCH 2 microplate reader (Agilent, Santa Clara, CA, USA). To calculate the bacteriostatic effect, the end of the bacterial lag phase was defined as the time at which the OD600 reached a value 5-fold higher than the initial OD600. Assuming proportionality between the OD600 and the cell count, this factor of 5 corresponds to roughly two generations of bacterial growth. To calculate a growth arrest or delay, the lag time measured for bacterial growth in the honey sample was divided by the lag time obtained for bacterial growth in artificial honey on the respective 96-well plate as control. The calculated growth delay quantifies the bacteriostatic effect, i.e., the increase in the duration of the lag phase due to the presence of antibacterial compounds. If the bacteria did not grow during the whole measurement, the effect was defined as bactericidal. ## 3.1. Measurement of the Antibacterial Activity of Honeys To evaluate the antibacterial effect of MGO, *Bacillus subtilis* was chosen as a bacterial model strain.
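The lag-phase and growth-delay calculation described in Section 2.7 can be sketched as follows; the OD600 readings used here are invented for illustration, not measured data.

```python
# Sketch of the growth-delay calculation described in Section 2.7:
# the lag phase ends when the OD600 first reaches 5x its initial value,
# and the growth delay (GD) is the lag time in the honey sample divided
# by the lag time in the artificial-honey control on the same plate.
# The OD600 readings below are invented for illustration.

def lag_time(times_h, od600, factor=5.0):
    """Return the first time at which OD600 >= factor * initial OD600,
    or None if the threshold is never reached."""
    threshold = factor * od600[0]
    for t, od in zip(times_h, od600):
        if od >= threshold:
            return t
    return None

def growth_delay(times_h, od_sample, od_control):
    """GD = lag time in sample / lag time in control; infinite if no growth
    (treated as bactericidal in the assay)."""
    lag_s = lag_time(times_h, od_sample)
    lag_c = lag_time(times_h, od_control)
    if lag_s is None:
        return float("inf")
    return lag_s / lag_c

# Invented example: readings every 2 h over 24 h
times = list(range(0, 26, 2))
control = [0.05, 0.06, 0.09, 0.15, 0.27, 0.5, 0.8, 1.0, 1.1, 1.1, 1.1, 1.1, 1.1]
sample  = [0.05, 0.05, 0.05, 0.06, 0.06, 0.08, 0.12, 0.2, 0.35, 0.6, 0.9, 1.0, 1.1]
print(growth_delay(times, sample, control))  # -> 2.0
```

With these example curves the control reaches the 5-fold threshold at 8 h and the honey sample at 16 h, giving a growth delay of 2.0; a sample that never reaches the threshold is reported as bactericidal (infinite delay).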
Due to the absence of glutathione in cells of B. subtilis, which produces bacillithiol instead, this strain has a lower capacity to detoxify MGO [31]. Therefore, enzymatic intracellular MGO degradation can be neglected. Additionally, the antibacterial activity of hydrogen peroxide formed by glucose oxidase in honeys can be ruled out as an antibacterial factor, since B. subtilis is able to degrade hydrogen peroxide through its catalase activity. These assumptions were tested by measuring growth curves of B. subtilis in the presence of $15\%$ solutions of artificial honey, cornflower honey and the manuka honeys labeled MGO250+ and MGO400+, respectively. Cornflower honey, which is well known for a high glucose oxidase activity, showed only a slight inhibiting effect, with a growth delay of 2.3 compared with artificial honey (see Figure 1). Furthermore, the addition of hydrogen peroxide to artificial honey in honey-relevant concentrations did not significantly delay the growth of B. subtilis (see Figure S1). In contrast, manuka honey MGO250+ showed a growth delay of 5.3. This confirms that hydrogen peroxide plays only a minor role in the antibacterial effect of honey on B. subtilis, and MGO can be assumed to be the main inhibiting compound in manuka honey. The inhibiting effect of manuka honey MGO250+ on the growth of B. subtilis is clearly observable (Figure 1). The strength of the antibacterial effect depends on the MGO content of the investigated honeys: higher MGO contents lead to a longer lag phase, after which the bacteria start to grow. The bacteriostatic, but not bactericidal, effect is likely due to chemical or microbial degradation of MGO during the measurement. MGO can react within the Maillard reaction with lysine and arginine side chains of proteins in the liquid medium [32], which reduces the MGO content. Besides its weak glyoxalase system, B. subtilis may degrade MGO via other pathways, e.g., reduction to acetol by aldo-keto reductase [33].
If the MGO level drops below a certain concentration, the bacteria are able to start growing. In the presence of the MGO400+ honey, on the other hand, the bacteria did not grow during the measurement. For the purpose of this study, this is considered a bactericidal effect, even though it cannot be ruled out that the strain would have grown after a longer incubation period. This model therefore allows quantification of the antibacterial activity of different honeys. In particular, comparative statements between (manuka) honeys are possible, especially relative to a reference honey. In principle, this assay is also transferable to other bacterial strains; however, for the evaluation of the bacteriostatic effect, the activity of glyoxalase, catalase and other enzymes responsible for the degradation of antibacterial compounds has to be considered. ## 3.2. Antibacterial Activity of Commercial Manuka Honey Samples To evaluate the antibacterial activity of commercial manuka honey samples, the assay was applied to four commercial manuka honeys labeled for wound-healing purposes. Besides using the $30\%$ honey solutions for measurements (resulting in a $15\%$ solution in the assay), all honeys were also diluted to $2\%$, $5\%$, $10\%$, $15\%$ and $20\%$ with a $30\%$ solution of artificial honey in liquid medium prior to the addition of the liquid culture and LB-medium. The dilution was carried out with artificial honey to achieve varying MGO levels while keeping the osmotic pressure in the assays constant. Thereby, every sample contained the same amount of sugars, which is necessary to compare the antibacterial effect of diluted honey samples [3]. It can be seen that higher MGO contents in the assay lead to higher growth delays (Figure 2, Table S1).
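The MGO concentrations quoted for the assay follow directly from a honey's MGO content and the dilution scheme described above; a minimal sketch of the conversion (assuming, as the text implies, that mg MGO per kg honey in a w/v solution translates directly to µg per mL):

```python
# Sketch: convert a honey's MGO content (mg/kg) to the MGO concentration
# in a w/v honey solution (ug/mL). Assumes the convention that 1 mg/kg
# equals 1 ug MGO per g honey, with honey_fraction in g honey per mL.

def assay_mgo_ug_per_ml(honey_mgo_mg_per_kg, honey_fraction):
    """MGO in solution, ug/mL = (ug MGO per g honey) x (g honey per mL)."""
    return honey_mgo_mg_per_kg * honey_fraction

# A 30% (w/v) solution of a honey containing 100 mg/kg MGO:
print(assay_mgo_ug_per_ml(100, 0.30))  # -> 30.0 ug/mL
```

This reproduces the figure used in the text: a 30% solution of a honey with 100 mg/kg MGO corresponds to 30 µg MGO per mL in the assay.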
In addition, it is noticeable that similar MGO contents in the honeys did not necessarily lead to the same growth delays and, conversely, that no conclusion can be drawn from the growth delay about the MGO content in the assay. Comparing honey one with honey three, for instance, an adjustment to 30 µg MGO per mL (corresponding to a $30\%$ solution of a honey containing 100 mg/kg MGO) resulted in growth delay values of four and six, respectively. Conversely, to obtain a growth delay of five, an MGO concentration of 24 µg/mL is needed with honey one but 31 µg/mL with honey four. As mentioned above, hydrogen peroxide as a second antibacterial agent in honeys can be excluded due to active catalase in the bacterial cell. Different pH values of the honeys can also be excluded, since measurements of the antibacterial activity of artificial honey spiked with MGO (pH ~6.5) did not differ when the pH value of the artificial honey was adjusted to five with gluconic acid. Therefore, neither the glucose oxidase activity nor the pH values of the honeys explain the differences in growth delay; the effect of MGO on bacterial growth must be enhanced or reduced by other honey ingredients. ## 3.3. Determination of Synergistic Effects in Manuka Honey The next aim of the study was to analyze which compounds besides MGO might be relevant for the antibacterial activity. Therefore, artificial honey spiked with MGO was used as a model system, to which potential synergistic substances were added at honey-relevant concentrations.
The following substances were chosen as potential synergists: dihydroxyacetone (DHA), the precursor substance of MGO in manuka honey; isolated manuka honey protein; gallic acid as a representative of the phenolic compounds in honey; 3-phenyllactic acid (3-PLA), a marker substance of manuka honey occurring in concentrations similar to MGO and considered to have antibacterial effects on Gram-positive bacteria [34]; and 3-deoxyglucosone (3-DG), another abundant dicarbonyl compound in honey. Except for 3-PLA, none of the compounds tested were found to enhance the antibacterial activity of MGO (see Figures S2 and S3 in the supplementary material). In particular, the honey protein did not show any effect; for further studies, structural changes and a loss of a possible antibacterial effect during the protein extraction should be considered. Whereas 3-PLA in honey-relevant concentrations of up to 2000 mg/kg, alone or added to artificial honey containing 100 mg/kg MGO, did not lead to a delay in the growth curves, adding 3-PLA to artificial honeys with 250 mg MGO/kg or more clearly increased the antibacterial activity caused by MGO alone. As an example, the addition of 2000 mg/kg 3-PLA to artificial honey with 400 mg/kg MGO increased the growth delay from 4.06 to 5.05 (Figure 3, Table S2). To simulate real honey samples, the experiment was repeated with a manuka honey naturally containing 72 mg/kg MGO. This manuka honey was spiked with up to an additional 400 mg/kg MGO and up to 2000 mg/kg 3-PLA. Concerning the antibacterial activity, the same effect was observed as in the artificial honeys: the addition of 3-PLA resulted in a dose-dependent increase in growth delay when MGO concentrations of 322 mg/kg or higher were present; e.g., the addition of 2000 mg/kg 3-PLA to the honey containing 472 mg/kg MGO increased the growth delay from 5.64 to 6.85 (Figure 4, Tables S2 and S4).
In the presence of lower MGO concentrations, 3-PLA did not show a growth-delay-enhancing effect. There are two explanatory approaches for the synergistic effect of 3-PLA with MGO. Firstly, MGO is stabilized by 3-PLA in the medium. MGO is a reactive substance which can react with proteins in the liquid medium; therefore, the MGO concentration in the assay decreases over time. To test this, MGO was diluted to a concentration of 120 mg/L (corresponding to a 30% solution of a honey containing 400 mg/kg MGO) with LB medium and in LB medium containing 600 mg/L 3-PLA (corresponding to a 30% solution of a honey containing 2000 mg/kg 3-PLA). The samples were incubated without the presence of B. subtilis. The MGO content was analyzed at 0 h, 1 h, 3 h, 5 h, 8 h and 24 h during the 24 h incubation at 37 °C. While the MGO level in the sample without 3-PLA dropped from 120 mg/L to 7 mg/L within 24 h, the MGO content in the sample with 3-PLA decreased only to 35 mg/L (Figure 5, Table S3). Therefore, a higher apparent MGO concentration in the assay leads to longer lag times of the bacteria. The mechanism by which MGO is stabilized by 3-PLA is still unknown. Besides this extracellular effect, intracellular effects might also be relevant for the synergistic effect of 3-PLA. It has been shown in the literature that 3-PLA in high concentrations (>10 mg/mL) damages or alters the cell wall of Gram-positive bacteria, e.g., Listeria monocytogenes, due to the loss of cell wall rigidity [34]. With regard to manuka honey, 3-PLA might also interact with bacterial cell walls in honey-relevant concentrations (~100 µg/mL) without leading to cell death, but to a “softening” of the cell wall. This could lead to a higher susceptibility of the cell towards MGO by increasing intracellular MGO concentrations. Besides 3-PLA, gallic acid showed similar properties enhancing the antibacterial effect of MGO. When added to artificial honeys containing no MGO, gallic acid did not have a growth-delaying effect.
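The stabilization experiment above can be summarized with a rough rate estimate. A sketch assuming, as a simplification not claimed in the paper, first-order loss of MGO in the medium:

```python
import math

def first_order_k(c0: float, c_t: float, t_h: float) -> float:
    """Apparent first-order rate constant (1/h) from start and end concentrations."""
    return math.log(c0 / c_t) / t_h

# Values reported in the text: 120 mg/L MGO dropped to 7 mg/L without 3-PLA
# and to 35 mg/L with 600 mg/L 3-PLA over the 24 h incubation at 37 C.
k_without = first_order_k(120, 7, 24)   # approx. 0.118 per hour
k_with    = first_order_k(120, 35, 24)  # approx. 0.051 per hour

half_life_without = math.log(2) / k_without  # approx. 5.9 h
half_life_with    = math.log(2) / k_with     # approx. 13.5 h
```

Under this assumption, 3-PLA more than doubles the apparent half-life of MGO in the medium, consistent with the longer lag times observed.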
On the other hand, when added to artificial honey containing 250 mg MGO/kg or more, higher levels of gallic acid led to a higher growth delay at the same MGO concentration (Figure 6). Additionally, in the artificial honey containing 400 mg/kg MGO, the increase in gallic acid from 1500 mg/kg to 2000 mg/kg resulted in a bactericidal effect. In this assay, gallic acid was used as a representative for the polyphenolic compounds in (manuka) honey. Although honeys contain rather small amounts of gallic acid itself, up to 66 mg/kg [35], the content of polyphenols expressed as gallic acid equivalents (GAE) is within the range of our model honeys, as manuka honeys containing 2170 mg GAE/kg have been described in the literature [20]. In order to check whether these results explain the differences in the antibacterial properties between honeys with similar MGO contents (Figure 2), the contents of 3-PLA and polyphenols, expressed as gallic acid equivalents (GAE), were measured (Table 2). It is noticeable that the two honeys with the highest 3-PLA and GAE contents are also the honeys with the highest growth delay against B. subtilis. As an example, to obtain a growth delay of about five with “honey 1” (containing 734 mg/kg 3-PLA and 636 mg/kg GAE), an MGO concentration of 24 µg/mL is needed, while “honey 4” (334 mg/kg 3-PLA and 386 mg/kg GAE) reaches a growth delay of five only at 31 µg/mL, which might be due to the different amounts of 3-PLA and GAE in the assay. Nevertheless, the exact quantitative contribution to the antibacterial effect of manuka honey, especially in models containing more than one synergist, and the specific mechanism of action of the synergistic effect have to be investigated in further studies.
To verify the synergistic effect of 3-PLA, the antibacterial activities of a manuka honey containing 259 mg/kg MGO and 467 mg/kg 3-PLA, an artificial honey spiked to the same MGO concentration and another spiked artificial honey with the same MGO and 3-PLA concentrations were compared. It was confirmed that 3-PLA enhances the effect of MGO above a concentration of 34.1 µg/mL, corresponding to a 30% solution of a honey containing 113 mg/kg MGO (Figure 7, Table S5). However, this enhancing effect is not enough to reach the antibacterial level of the commercial manuka honey sample. Whereas an MGO concentration of 27.3 µg/mL obtained from MGO-spiked artificial honey leads to a growth delay of 3.2 without 3-PLA and 3.4 in the presence of 3-PLA, the same MGO level obtained from manuka honey leads to a growth delay of 6.2 (Figure 7). Therefore, it can be concluded that polyphenolic compounds, as well as additional compounds, presumably other organic acids such as 4-hydroxyphenyllactic acid, 2-methoxybenzoic acid, syringic acid or 3,4,5-trimethoxybenzoic acid, or yet unknown substances in manuka honey, enhance the effect of MGO. ## 4. Conclusions For the first time, we demonstrated that 3-phenyllactic acid, a marker compound of manuka honey occurring in concentrations up to 1400 mg/kg, enhances the antibacterial activity of artificial honeys containing 250 mg/kg MGO or more against the model bacterium B. subtilis, even though 3-phenyllactic acid alone does not show any bacteriostatic effect in honey-relevant concentrations. This is due to higher apparent MGO concentrations in the assay resulting from MGO stabilization by 3-PLA. Additionally, first results indicate that polyphenols, tested with gallic acid as a substitute with similar chemical properties, also enhance the antibacterial effect of MGO. Therefore, 3-PLA and polyphenols as synergists of MGO may also be considered for the qualitative evaluation of manuka honey besides their chemical benefits.
Nevertheless, the differences in the antibacterial activities between artificial honey spiked with MGO and commercial manuka honey with the same amount of MGO cannot be explained by 3-PLA and GAE alone. Other substances in (manuka) honey appear to enhance the effect of MGO as well. The results of this study contribute to the understanding of the antibacterial mechanism of MGO, both on its own and in manuka honey. They further support the application of manuka honey beyond its use as a food item, e.g., in functional and medical applications such as wound healing. However, the assay presented in this study should be transferred to other bacteria, especially those without a proper glyoxalase system, such as S. aureus [31], as well as to other Gram-positive and Gram-negative bacteria and other microbiota to obtain more general data about the mechanism of action of the antimicrobial activity of methylglyoxal and manuka honey.
# Peptidomics Study of Plant-Based Meat Analogs as a Source of Bioactive Peptides ## Abstract The demand for plant-based meat analogs (PBMA) is on the rise as a strategy to sustain the food protein supply while mitigating environmental change. In addition to supplying essential amino acids and energy, food proteins are known sources of bioactive peptides. Whether protein in PBMA affords similar peptide profiles and bioactivities as real meat remains largely unknown. The purpose of this study was to investigate the gastrointestinal digestion fate of beef and PBMA proteins with a special focus on their potential as precursors of bioactive peptides. Results showed that PBMA proteins were less digestible than beef proteins. However, PBMA hydrolysates possessed an amino acid profile comparable to that of beef. A total of 37, 2420 and 2021 peptides were identified in the gastrointestinal digests of beef, Beyond Meat and Impossible Meat, respectively. The strikingly small number of peptides identified from the beef digest is probably due to the near-full digestion of beef proteins. Almost all peptides in the Impossible Meat digest were from soy, whereas 81%, 14% and 5% of peptides in the Beyond Meat digest were derived from pea, rice and mung proteins, respectively. Peptides in PBMA digests were predicted to exert a wide range of regulatory roles and were shown to have ACE inhibitory, antioxidant and anti-inflammatory activities, supporting the potential of PBMA as a source of bioactive peptides. ## 1. Introduction A growing global population poses critical challenges in sustaining protein supply under already constrained resources and alarming concerns over climate change.
Among various strategies towards sustainable protein production, such as cellular agriculture (i.e., cultured meat), alternative proteins (i.e., terrestrial plant, insect and seaweed) and valorization of agricultural by-products [1,2], developing plant-based meat analogs (PBMA) is an attractive solution to replace traditional livestock production [3]. The market shares of alternative proteins remain low when compared with meat, even though governments and innovative companies increasingly advertise these alternatives to traditional meat products or dishes, such as plant-based burgers [4]. One major hurdle is consumer acceptance; insect proteins show the lowest acceptance, followed by cultured meat, while terrestrial plant-based alternatives have the highest acceptance level [5]. Consumer acceptance of alternative proteins has been shown to be closely related to taste and health drivers, inherent color and aroma, familiarity, food neophobia and disgust [1,2]. Since the successful launch of Beyond Meat and Impossible Meat, the market for PBMA has been on the rise; the global plant protein-based meat market is estimated to reach approximately USD 21 billion by 2025 [6]. From a nutritional point of view, PBMA has unique advantages: negligible cholesterol content, low fat content and high protein content with a well-balanced amino acid pattern [7,8]. McClements et al. reported that PBMA burgers contained fewer calories, less cholesterol and less fat than conventional beef burgers, despite nearly equal protein content [9]. However, there are continuous debates over the health implications of PBMA due to the addition of additives and the use of highly processed ingredients [3,8]. The health benefits of plant foods are likely compromised in PBMA, so there is a need to develop clean-label and minimally processed products.
For instance, the clean-labelled ProDiem™ Refresh Soy is characterized by sustainable and optimized nutrients intended to provide a protein intake similar to that of egg/milk [10]. A wide range of alternative proteins is explored for use in PBMA, especially those from grains and legumes, such as soy, pea, wheat, mung and lentil [11]. However, terrestrial plant proteins commonly possess lower digestibility than livestock proteins, which challenges the nutritional profile of protein in meat analogs [12]. For example, Xie et al. reported that real meat (pork and beef) exhibited higher digestibility than PBMA during simulated gastrointestinal digestion, and that the digestibility of PBMA depends on the origin and structure of its proteins as well as the method of protein processing [13]. Food proteins are known as good sources of bioactive peptides. Bioactive peptides usually consist of 2–20 amino acids that are encrypted in their parent proteins and can exert regulatory roles once released in certain scenarios, including the gastrointestinal tract [1]. Given the increasing role of PBMA in human dietary patterns, it is imperative to understand its potential as a precursor of bioactive peptides. For instance, Chen et al. showed that PBMA-derived peptides (from soy and wheat proteins) had higher molecular weights and hydrophobicity than those from chicken breast [14]. Xie et al. reported that a larger number of peptides was identified from real meat than from PBMA after simulated gastrointestinal digestion [13]. However, the PBMA used in previous studies was prepared experimentally; research on commercial PBMA, especially from Beyond Meat and Impossible Meat, two major producers, is rarely reported. Likewise, systematic studies on the gastrointestinal fate of PBMA, especially the peptide profile and bioactivities after gastrointestinal digestion, are still insufficient.
Meanwhile, the peptide fragments released from real meat and PBMA undoubtedly differ owing to their different parent protein sequences, and thus the potential health benefits of these peptide fragments will also differ. Additionally, peptidomics and bioinformatics are emerging tools for identifying peptides and predicting their profiles, bioavailability and bioactivity [15]. Hence, exploring the digestibility and peptide profile after gastrointestinal digestion with the aid of peptidomics and bioinformatics will facilitate our understanding of the potential health benefits of PBMA. The purpose of this study was to compare the in vitro gastrointestinal digestion fate of beef and PBMA (from Beyond Meat and Impossible Meat), with a special focus on their potential as precursors of bioactive peptides, by assessing digestibility and peptide profiles, and to evaluate the relationship between peptide features and biofunctions (angiotensin-converting enzyme (ACE) inhibition, antioxidant and anti-inflammatory activities). ## 2.1. Materials Cooked patties of beef hamburger and Beyond Meat hamburger were bought from A&W (Edmonton, AB, Canada), and cooked patties of Impossible Meat burger were bought from Burger King (Edmonton, AB, Canada). ACE (from rabbit lung), hippuryl-His-Leu (HHL), pepsin (porcine gastric mucosa), pancreatin (porcine pancreas), 2,4,6-trinitrobenzenesulfonic acid (TNBS), cytochrome C, aprotinin, vitamin B12, (glycine)3, dithiothreitol (DTT) and angiotensin II (Ang II) were obtained from Sigma (Oakville, ON, Canada). The vascular smooth muscle A7r5 cell line was purchased from ATCC (Manassas, VA, USA). Dulbecco’s modified Eagle’s medium (DMEM), fetal bovine serum (FBS), 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES) and non-essential amino acids (NEAA) were obtained from Gibco Invitrogen (Burlington, ON, Canada). Dihydroethidium (DHE) was purchased from Biotium (Fremont, CA, USA).
Solvents used for UPLC were of chromatographic grade. Other chemicals applied were of analytical grade. ## 2.2. Preparation of Beef and PBMA Gastrointestinal Digests The cooked beef patties and plant-based patties (Beyond Meat and Impossible Meat) in this study were bought from stores. Minced beef and PBMA were suspended in ddH2O and then exposed to two-step simulated gastrointestinal digestion [16]. Briefly, beef and PBMA (5% protein, w/v) were hydrolyzed by pepsin (1% protease/substrate, w/w protein) at pH 2.0 and 37 °C for 2.0 h, and then the digests were adjusted to pH 7.5 for another 2.0 h of hydrolysis with pancreatin (1% protease/substrate, w/w protein). Hydrolysis was terminated by heating the slurry at 95 °C for 10 min to inactivate the proteases. Subsequently, the mixtures were centrifuged (8000× g, 15 min, 4 °C) to collect the supernatants, which were filtered through qualitative filter paper before being lyophilized to obtain the hydrolysates, including BfP (cooked beef-pepsin), BfPP (cooked beef-pepsin-pancreatin), ByP (cooked Beyond Meat-pepsin), ByPP (cooked Beyond Meat-pepsin-pancreatin), ImP (cooked Impossible Meat-pepsin) and ImPP (cooked Impossible Meat-pepsin-pancreatin). ## 2.3. Molecular Weight Distribution The molecular weight distribution of beef and PBMA hydrolysates was determined by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and size exclusion chromatography according to the methods of Laemmli et al. [17] and Fan et al. [18], respectively. Briefly, for SDS-PAGE, beef and PBMA hydrolysates were initially dissolved in water at a concentration of 10 mg/mL and then diluted with 2× Laemmli sample buffer containing 5% β-mercaptoethanol at a volume ratio of 1:1.
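The two-step digestion described above can be captured as a small, parameterized protocol. A minimal sketch for reproducibility; the step records and names below are our own illustration, not a published data format:

```python
from dataclasses import dataclass

@dataclass
class DigestionStep:
    protease: str
    ph: float
    temp_c: float
    duration_h: float
    enzyme_substrate_pct: float  # % protease/substrate, w/w protein

# Two-step simulated gastrointestinal digestion as described in the text,
# followed by heat inactivation (95 C, 10 min) and centrifugation.
SIMULATED_GI_DIGESTION = [
    DigestionStep("pepsin", ph=2.0, temp_c=37.0, duration_h=2.0, enzyme_substrate_pct=1.0),
    DigestionStep("pancreatin", ph=7.5, temp_c=37.0, duration_h=2.0, enzyme_substrate_pct=1.0),
]

total_h = sum(s.duration_h for s in SIMULATED_GI_DIGESTION)  # 4.0 h of enzymatic hydrolysis
```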
The prepared beef and PBMA hydrolysates were heated to 95 °C for 5 min before 20 µL of each was loaded onto a 16.5% Mini-Protean Tris-Tricine gel in a Mini-PROTEAN Tetra Cell with a PowerPac Basic electrophoresis apparatus (Bio-Rad, CA, USA) at a constant voltage of 150 V. Gels were stained with Coomassie brilliant blue R250 dye, destained with destaining buffer (ddH2O:methanol:acetic acid = 5:4:1, v/v/v), and then scanned with an Alpha Innotech gel scanner (San Leandro, CA, USA). In addition, the molecular weight distribution was analyzed by size exclusion chromatography on an AKTA explorer 10XT system (GE Healthcare, Uppsala, Sweden) with a Superdex Peptide 10/300 GL column. Beef and PBMA hydrolysates were dissolved in 30% ACN containing 0.1% TFA. Subsequently, 100 µL of beef and PBMA hydrolysates at a concentration of 1 mg/mL was injected into the Superdex Peptide 10/300 GL column and eluted isocratically at a flow rate of 0.5 mL/min. Peaks were monitored at 220 nm. The molecular weight was calibrated with a protein marker mixture in SDS-PAGE, whereas aprotinin, cytochrome C, (glycine)3 and vitamin B12 were used as molecular weight markers in size exclusion chromatography. ## 2.4. Degree of Hydrolysis (DH) and Amino Acid Compositions The DH of beef and PBMA hydrolysates was evaluated using the TNBS method [19]. Amino acid analysis of beef and PBMA hydrolysates was performed according to the method of Zheng et al. [20]. ## 2.5. Identification of Peptides by LC-MS/MS The gastrointestinal-digested beef and PBMA hydrolysates were analyzed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) on an Atlantis dC18 UPLC column (Waters, Milford, MA, USA) using a nano-Acquity RP-UPLC system, coupled with a Micromass Quadrupole Time-of-Flight (Q-TOF) Premier mass spectrometer (Bruker, Bremen, Germany), as previously described [16].
Solvents were chromatographic grade acetonitrile (mobile phase B) and H2O (mobile phase A), each containing 0.1% formic acid. The gradient program was set as 1%–60%–95% mobile phase B over 0–2–40–55 min. Mass spectra were acquired in positive-ion mode. The quadrupole ion energy was set at 4.0 eV, while the collision-induced dissociation energy was set at 8–50 eV. The parameters for the ESI interface were as follows: 180 °C drying gas temperature, 8.0 L/min drying gas flow and 1.5 bar ESI nebulizer pressure. Data were interpreted by searching Mascot. The major parent protein sequences of beef, pea, soy, mungbean, rice and potato were obtained from UniProtKB [21]. ## 2.6. ACE Inhibition Assay ACE inhibition was measured according to the method of Wu et al. [22]. ACE, HHL and the beef and PBMA hydrolysates were dissolved and diluted with 100 mM potassium phosphate buffer containing 300 mM NaCl (pH 8.3). Substrate HHL (50 μL, 5 mM) and beef/PBMA hydrolysate (10 μL) were initially mixed and preincubated at 37 °C for 5 min in a 2 mL polypropylene centrifuge tube, and then 20 μL of preincubated ACE (37 °C, 2 mU) was added and reacted for another 30 min in an Eppendorf Thermomixer R (Brinkmann Instruments, NY, USA). The reaction was terminated by adding 1 M HCl (125 μL) and then analyzed using a UPLC system with an Acquity BEH C18 column (1.7 μm, 2.1 mm × 50 mm). Solvents were chromatographic grade acetonitrile (mobile phase B) and H2O (mobile phase A) containing 0.05% formic acid. Samples (5 μL) were eluted at a flow rate of 0.245 mL/min, and the gradient program was set as 5%–60%–60%–5% B over 0–3.5–4.2–5 min. Absorbance was monitored at 220 nm. Hippuric acid was identified and quantified through its standard curve. The IC50 value represents the concentration of hydrolysate that inhibits ACE activity by 50%. ## 2.7.
Desalting Protocol, Cell Culture and Cytotoxicity Before incubation with A7r5 cells, beef and PBMA hydrolysates were desalted according to the method described previously by Fan et al. [18]. Briefly, beef and PBMA hydrolysates were dissolved in ddH2O and then loaded onto a Sep-Pak 35cc tC18 cartridge (Waters, MA, USA). The cartridge was first washed with two column volumes of ddH2O to remove salts. Subsequently, ACN was added to wash the cartridge, and the ACN eluent was collected, vacuum evaporated and freeze-dried. A7r5 cells were cultured in DMEM medium containing 10% FBS, 25 mM HEPES and 1% penicillin-streptomycin in a cell incubator at 37 °C, 5% CO2 and 100% humidity. The culture media were changed every two days. The cytotoxicity of beef and PBMA hydrolysates against A7r5 cells was measured with an alamarBlue assay, as described by Fan et al. [18]. A7r5 cells were seeded in a 96-well plate, treated with 1.0 mg/mL of beef and PBMA hydrolysates for 24 h upon reaching 80% confluency, and the medium was then replaced with 200 µL of 10% alamarBlue solution for another 4 h. Finally, the solution (150 µL) was transferred into an opaque 96-well plate for fluorescence detection, with an emission wavelength of 590 nm and an excitation wavelength of 560 nm. ## 2.8. Superoxide Detection Superoxide in A7r5 cells was investigated by the dihydroethidium (DHE) staining method [23]. A7r5 cells were pre-incubated with hydrolysates (1.0 mg/mL) for 1 h before the addition of Ang II (1 µM) for 0.5 h. Subsequently, DHE (20 µM) was added and incubated for another 30 min. After that, cells were washed three times with non-phenol-red DMEM, and the fluorescence intensity was measured with an Olympus IX81 fluorescence microscope (Olympus, Tokyo, Japan). Each data point comprised two or three random fields. The mean fluorescence intensity was obtained using ImageJ software (National Institutes of Health, Bethesda, MD, USA).
## 2.9. Western Blotting A7r5 cells were pre-incubated with beef and PBMA hydrolysates (1.0 mg/mL) for 1 h before adding Ang II (1 µM) for 24 h. After the treatment, cells were scraped and lysed in boiling Laemmli buffer containing 50 mM DTT and 0.2% Triton X-100, and then cell samples were loaded onto a 9% separating gel and transferred to a nitrocellulose membrane for incubation with specific antibodies. Bands of cyclooxygenase-2 (COX-2; Abcam, Toronto, ON, Canada) and inducible nitric oxide synthase (iNOS; BD Biosciences, San Jose, CA, USA) were normalized to GAPDH (ab8245, Abcam). The fluorescent bands were visualized by adding the corresponding secondary antibodies, and the signals were detected using a Licor Odyssey BioImager (Licor Biosciences, Lincoln, NE, USA). ## 2.10. Statistical Analysis SPSS 17.0 (SPSS Inc., Chicago, IL, USA) was used for statistical analysis (ANOVA followed by Duncan's post hoc test). Data are expressed as mean ± standard deviation. Differences were considered statistically significant at p < 0.05. ## 3.1. Molecular Weight Distribution, DH and Amino Acid Compositions of Beef and PBMA Digests Figure 1A shows that the pepsin and/or pancreatin treatments caused a substantial decrease or disappearance in the intensity of large-molecular-weight protein bands, owing to the degradation of proteins into peptides and free amino acids. Likewise, size exclusion chromatography showed that the small-molecular-weight fractions in beef and PBMA hydrolysates increased rapidly from the gastric to the intestinal digestion phase (Figure 1B) and came to dominate the peptide composition of BfPP, ByPP and ImPP owing to further extensive hydrolysis. Furthermore, the DH data were consistent with the results of SDS-PAGE and size exclusion chromatography (Table 1).
The DH of ByPP and ImPP increased gradually during in vitro digestion, being 4.92% and 6.09% after gastric digestion and further increasing to 7.94% and 7.48% after intestinal digestion, respectively. Beef hydrolysate had a higher DH than PBMA throughout digestion. The gastrointestinal digestion fates of real meat and PBMA are hypothesized to be different due to the diversity in the structures and compositions of the raw materials. In particular, PBMA contains different sources of proteins compared with real meat, as well as a variety of food additives which may affect protein digestion [24,25]. Moreover, the processing technologies in PBMA production may result in the formation of structures that negatively impact protein digestion. For instance, the dense mesh structure or aligned protein fibrils formed under thermal–mechanical treatment largely impair the digestibility of proteins in PBMA [26]. The better swelling capacity of beef promotes penetration of gastrointestinal proteases, whereas the bulkiness of storage proteins, protein aggregates and the presence of antinutritional factors in beans limit the digestion of PBMA [27]. Our results are consistent with previous research. For instance, Xie et al. demonstrated that real pork and beef showed higher digestibility than PBMA [13], and the study of McClements et al. also reported the lower digestibility of PBMA [8]. Amino acid composition is an indicator of the nutritional value of protein hydrolysates [28]. Essential amino acids (EAA) are amino acids which cannot be synthesized by the human body, or whose rate of synthesis is inadequate to meet the biological needs of the body; thus, they need to be supplied by food protein intake. Normally, Val, Leu, Ile, Phe, Lys, His, Thr and Met are considered the eight EAA. Table 1 and Figure 2 show that the total amino acid compositions of the three hydrolysates ranged from 67.56 to 87.64 g/100 g.
The gastrointestinal digest of beef had the highest amino acid content, whereas ByPP and ImPP had relatively low contents of amino acids. However, the content of EAA in the PBMA hydrolysates was comparable to that of the beef counterpart. The contents of EAA in ByPP and ImPP were 42.91 g/100 g and 41.60 g/100 g, whereas a higher value (46.04 g/100 g) was found in BfPP. EAA cannot be synthesized by mammals and must be obtained from food, and they have important regulatory effects in many physiological events [29,30]. On the other hand, PBMA hydrolysates also contain high levels of non-EAA. Among these, Glu, Gly and Ala were abundant in beef hydrolysate, whereas the Asp and Arg contents were lower. In particular, no Glu was detected in the PBMA hydrolysates. There is no compelling evidence to support that synthesis of non-EAA in the body can satisfy the requirements of physiological activities [31]. Thus, the content of non-EAA should still be taken into consideration when evaluating the nutritional value of proteins. From the amino acid profiles, PBMA hydrolysates are expected to possess nutritional properties comparable to those of beef hydrolysate. ## 3.2. Effects of Gastrointestinal Digestion on Peptide Profiles of Beef and PBMA Hydrolysates LC-MS/MS was used to identify the peptide profiles of beef and PBMA hydrolysates in this study, with the purpose of following the generation of peptides during in vitro gastrointestinal digestion and their relationship with bioactivities. To identify the potential bioactive peptides and predict their chemical properties, peptidomics and bioinformatics approaches were applied. Additionally, a peptide fragment may recur multiple times in its parent protein sequences, which can impact the theoretical content of peptides; this variation was therefore also considered.
A total of 37, 2420 and 2021 peptides were identified in BfPP, ByPP and ImPP, respectively, indicating that gastrointestinal digestion had a significant impact on peptide release (Figure 3A). Among them, the abundant peptide fragments in ByPP were mainly derived from pea protein (81%), followed by rice protein (14%) and mung protein (5%). Almost all peptides identified in ImPP originated from soy protein. These results were consistent with the declaration of protein origins in their formulas. Even though beef hydrolysate had the highest DH, surprisingly, far fewer peptides were identified therein. This is probably because beef protein is more easily digested into free amino acids by gastrointestinal proteases, or because beef-derived peptides have stronger hydrophilic properties and were washed off the reversed-phase column prior to sequence identification. It is worth noting that the amino acid composition of a protein largely dictates the extent of digestion, since residues such as Phe, Tyr, Trp, Lys and Arg are the cleavage sites of gastrointestinal proteases [32]. Unfortunately, the peptide fragments released by in silico hydrolysis (pepsin and trypsin) in Supplementary Table S1 show a weak correlation with the peptides identified by LC-MS/MS, suggesting a gap between in silico hydrolysis and actual enzymatic hydrolysis. In particular, in silico hydrolysis is performed under ideal conditions where all proteins are fully digested, whereas the food matrix and processing conditions have a major impact on the digestibility of food proteins. Similar discrepancies between virtual and actual hydrolysis were also reported by others [33,34]. It is generally accepted that small peptides in protein hydrolysates possess better biological activities [16,24]. PeptideRanker is widely used to predict the potential bioactivity of peptides.
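The in silico hydrolysis mentioned above applies fixed protease cleavage rules to the parent protein sequence. A minimal sketch of the classical trypsin specificity rule (cleavage C-terminal to Lys/Arg, except before Pro); the pepsin rule would be handled analogously, and the function name is ours:

```python
def tryptic_digest(seq: str) -> list:
    """In silico trypsin digest: cleave after K or R unless followed by P."""
    fragments, start = [], 0
    for i, aa in enumerate(seq):
        if aa in "KR" and (i + 1 == len(seq) or seq[i + 1] != "P"):
            fragments.append(seq[start:i + 1])
            start = i + 1
    if start < len(seq):
        fragments.append(seq[start:])
    return fragments

print(tryptic_digest("AKRPGFK"))  # → ['AK', 'RPGFK'] (no cleavage after R because of P)
```

Real digests deviate from such idealized rules (missed cleavages, matrix and processing effects), which is exactly the gap between virtual and experimental hydrolysis noted in the text.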
A total of 5, 798 and 555 potent peptides were selected from BfPP, ByPP and ImPP, respectively, based on the following filter conditions: peptide length < 20, molecular weight < 3 kDa and PeptideRanker score > 0.2. Parent proteins, peptide sequences, repeat numbers, PeptideRanker scores, CPPpred scores, potential bioactive peptides and the biological functions of these potent peptides are listed in Supplementary Tables S2–S7. Additionally, Figure 3B shows the distribution of selected peptides in each sample according to their protein origins. Globulins, including legumin and vicilin, are among the major storage proteins in peas [10]. Almost half of the peptides occurring in ByPP were from legumin and vicilin in peas. The remaining half was derived from other storage proteins such as provicilin and convicilin in peas, glutelin and globulin in rice, and globulin and glycinin in mung. On the other hand, peptides identified from ImPP were mostly derived from glycinin and conglycinin. Additionally, the numbers of small peptides (peptide length < 10) released by gastrointestinal proteases were 222 and 166 in ByPP and ImPP, accounting for 35.24% and 29.96% of the total peptides identified, respectively. Peptides released in ByPP were repeated more frequently than those in ImPP. The potential bioactivities of peptides were predicted by calculating molecular weight, PeptideRanker scores and CPPpred scores. PeptideRanker is used to predict peptide bioactivities, and CPPpred predicts the ability of a peptide to cross the cell membrane [34]. As shown in Figure 3D, peptides in ImPP have higher PeptideRanker scores than those in ByPP. Additionally, most peptides in ByPP and ImPP had strong cell penetration capacity. These results indicated that gastrointestinal digestion could effectively release bioactive peptides from PBMA.
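The screening criteria above translate directly into a simple filter. A sketch using standard average amino acid residue masses (the function names and mass-table layout are ours; PeptideRanker scores would come from the external predictor):

```python
# Average residue masses (Da) for the 20 standard amino acids.
RESIDUE_MASS = {
    "G": 57.05, "A": 71.08, "S": 87.08, "P": 97.12, "V": 99.13,
    "T": 101.10, "C": 103.14, "L": 113.16, "I": 113.16, "N": 114.10,
    "D": 115.09, "Q": 128.13, "K": 128.17, "E": 129.12, "M": 131.19,
    "H": 137.14, "F": 147.18, "R": 156.19, "Y": 163.18, "W": 186.21,
}
WATER = 18.02  # mass of the water added on hydrolysis (Da)

def peptide_mw(seq: str) -> float:
    """Approximate average molecular weight of a peptide in Da."""
    return sum(RESIDUE_MASS[aa] for aa in seq) + WATER

def passes_screen(seq: str, ranker_score: float) -> bool:
    """Criteria used in the text: length < 20, MW < 3 kDa, PeptideRanker score > 0.2."""
    return len(seq) < 20 and peptide_mw(seq) < 3000.0 and ranker_score > 0.2
```

For example, a tripeptide such as "GAV" (about 245 Da) passes the size criteria and would be retained or discarded solely on its PeptideRanker score.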
Recently, lifestyle-related chronic diseases have triggered a series of global public health concerns, leading to growing interest in researching food bioactives, including bioactive peptides, as alternatives for treatment. To further clarify and predict the potential biological functions of beef and PBMA hydrolysates, the screened peptides were compared to the reported active sequences in the BIOPEP database (Supplementary Tables S2–S7). Peptides sharing the same sequence as reported bioactive sequences in the BIOPEP database were assumed to exhibit the same biological functions. Bioactive peptides in PBMA hydrolysates were predicted to exert a wide range of regulatory roles, including amelioration of cardiovascular diseases (including hypertension, diabetes, obesity and hyperlipidemia), antioxidation, anti-inflammation, anticancer and neuroprotection (Figure 3E). Taken together, our results suggest that PBMA is a good precursor of bioactive peptides with various biological functions. ## 3.3. ACE Inhibition, Antioxidant and Anti-Inflammation of Beef and PBMA Hydrolysates After predicting the bioactivities of peptides identified from beef and PBMA digests, we further determined their ACE inhibitory, antioxidant and anti-inflammatory activities. ACE is a target for blood pressure reduction [35], and amelioration of oxidative stress and inflammatory responses has been considered a key preventive strategy against various chronic diseases [36,37,38]. Hypertension is widely known as a risk factor for cardiovascular diseases, and the renin–angiotensin system (RAS) plays a pivotal role in blood pressure regulation [39]. ACE activates the RAS by converting angiotensin (Ang) I into Ang II, a potent vasoconstrictor that triggers hypertension. Figure 4 shows the in vitro ACE inhibition of beef and PBMA digests.
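The IC50 values reported for these digests come from dose-response measurements; since the fitting procedure is not specified in the text, the following is only a sketch of one common approach, log-linear interpolation around 50% inhibition, using made-up data points:

```python
import math

def ic50_by_interpolation(concs, inhibitions):
    """Estimate IC50 (same units as `concs`) by linear interpolation of percent
    inhibition against log10(concentration). `concs` must be ascending and the
    50% point bracketed. Hypothetical helper, not the paper's actual procedure."""
    pairs = list(zip(concs, inhibitions))
    for (c1, i1), (c2, i2) in zip(pairs, pairs[1:]):
        if i1 <= 50.0 <= i2:
            x1, x2 = math.log10(c1), math.log10(c2)
            x = x1 + (50.0 - i1) * (x2 - x1) / (i2 - i1)
            return 10 ** x
    raise ValueError("50% inhibition not bracketed by the data")

# Hypothetical dose-response of a hydrolysate (mg/mL vs. % ACE inhibition):
concs = [0.05, 0.1, 0.2, 0.4, 0.8]
inhib = [20.0, 33.3, 50.0, 66.7, 80.0]
est_ic50 = ic50_by_interpolation(concs, inhib)  # approx. 0.2 mg/mL
```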
ByPP showed the highest ACE inhibition, with an IC50 value of 0.16 ± 0.03 mg/mL, followed by ImPP and BfPP (IC50: 0.20 ± 0.05 and 0.26 ± 0.05 mg/mL, respectively). Evidently, the ACE inhibition results were consistent with the biological function prediction by the in silico approach (Figure 3E). Similarly, a previous study also found that PBMA-derived digests showed ACE inhibitory activity [13]. Oxidative stress triggers various kinds of damage to cells and further disrupts cellular function [40]. Sustained and aberrant oxidative stress contributes to vascular dysfunction, thereby causing hypertension, type 2 diabetes, atherosclerosis and other chronic diseases [41]. Vascular smooth muscle cells (A7r5) are a well-established model for evaluating health benefits, including relief of vascular dysfunction, anti-inflammation and antioxidation. In this study, antioxidant and anti-inflammatory activities in Ang II-induced A7r5 cells were studied. None of the hydrolysates showed cytotoxicity against A7r5 cells. Treatment with beef and PBMA hydrolysates significantly lowered superoxide levels in Ang II-stimulated A7r5 cells, especially ByPP and ImPP (Figure 5). Fan et al. found that spent hen-derived peptides exhibited antioxidant effects by acting as direct radical scavengers or mediating endogenous antioxidant enzymes in Ang II-stimulated A7r5 cells [42]. Similarly, the egg white-derived peptide IRW was also demonstrated to exhibit an antioxidant effect in A7r5 cells against Ang II stimulation [39]. In our study, the remarkable inhibition of superoxide generation ($p \leq 0.05$) in A7r5 cells indicated that PBMA is a good precursor of antioxidant peptides. Vascular inflammation is an underlying cause of hypertension and cardiovascular diseases. 
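An IC50 such as 0.16 ± 0.03 mg/mL is read off a dose-inhibition curve. A minimal sketch using linear interpolation between the two measurements bracketing 50 % inhibition (the dose-response values below are hypothetical; real analyses typically fit a logistic curve instead):

```python
def ic50_interp(points):
    """Linearly interpolate the concentration giving 50 % inhibition.

    points: list of (concentration, % inhibition), sorted by concentration.
    """
    for (c0, i0), (c1, i1) in zip(points, points[1:]):
        if i0 <= 50.0 <= i1:
            return c0 + (50.0 - i0) * (c1 - c0) / (i1 - i0)
    raise ValueError("50 % inhibition not bracketed by the data")

# Hypothetical dose-response readings for one hydrolysate (mg/mL, %):
curve = [(0.05, 18.0), (0.10, 30.0), (0.20, 55.0), (0.40, 78.0)]
print(round(ic50_interp(curve), 3))  # 0.18
```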
COX2 and iNOS are two proinflammatory mediators in vascular smooth muscle cells [38]; thus, the expression of these two proteins in A7r5 cells was measured to evaluate the anti-inflammatory activity of beef and PBMA hydrolysates. As shown in Figure 4, iNOS and COX2 expression levels surged in A7r5 cells upon Ang II insult ($p \leq 0.05$), whereas hydrolysate treatment significantly inhibited their protein expression. Similarly, the peptides VVHPKESF and IRW could attenuate Ang II-induced inflammation in A7r5 cells [43,44]. These findings suggest the formation of anti-inflammatory peptides from PBMA by gastrointestinal digestion. ## 4. Conclusions This study mimicked the protein digestion of beef and PBMA in an in vitro gastrointestinal tract and further investigated the peptide profiles and bioactivities by combining peptidomics, bioinformatics and wet lab experiments. Results obtained from SDS-PAGE, size exclusion chromatography and DH showed that gastrointestinal proteases were able to degrade beef and PBMA proteins. Notably, PBMA protein exhibited lower digestibility than beef, as reported previously. From the amino acid profile, PBMA hydrolysates are expected to possess nutritional properties comparable to beef hydrolysate. A total of 37, 2420, and 2021 peptides were identified in the gastrointestinal digests of beef, Beyond Meat and Impossible Meat, respectively. The markedly smaller number of peptides identified from the beef digest is probably due to the near-complete digestion of beef proteins. The analysis of peptide profiles indicated that PBMA could be considered a good precursor of bioactive peptides with widespread biological functions, including amelioration of cardiovascular diseases (including hypertension, diabetes, obesity and hyperlipemia), antioxidation, anti-inflammation, anticancer activity and neuroprotection. 
Furthermore, PBMA hydrolysates exhibited strong ACE inhibitory, antioxidant and anti-inflammatory activities in test tube experiments and A7r5 cells. The current results underscore the promise of generating bioactive peptides from PBMA.
# Indigenous Eye Health in the Americas: The Burden of Vision Impairment and Ocular Diseases ## Abstract A review of the burden of vision impairment and blindness and of ocular disease occurrence in Indigenous Peoples of the Americas. We systematically reviewed findings on the frequency of vision impairment and blindness and/or the frequency of ocular findings in Indigenous groups. The database search yielded 2829 citations, of which 2747 were excluded. We screened the full texts of 82 records for relevance and excluded 16. The remaining 66 articles were examined thoroughly, and 25 presented sufficient data to be included. Another 7 articles derived from references were included, for a total of 32 selected studies. When considering adults over 40 years old, the highest frequencies of vision impairment and blindness in Indigenous Peoples varied from $11.1\%$ in high-income North America to $28.5\%$ in tropical Latin America, rates considerably higher than those in the general population. Most of the ocular diseases reported were preventable and/or treatable, so blindness prevention programs should focus on access to eye examinations, cataract surgeries, control of infectious diseases, and spectacles distribution. Finally, we recommend actions in six areas of attention towards improving eye health in Indigenous Peoples: access; integration of eye services with primary care; telemedicine; customized propaedeutics; education on eye health; and quality of data. ## 1. Introduction Vision impairment and blindness are estimated to affect more than 339 million people worldwide, with 43.3 million people blind and 295.3 million people having moderate to severe visual impairment (MSVI), representing a prevalence of 5.25 cases of blindness per 1000 persons ($95\%$ CI: 4.58–5.87) and 35.8 cases of MSVI per 1000 persons ($95\%$ CI: 32.4–39.2) [1]. 
Cataract, glaucoma, under-corrected refractive errors, age-related macular degeneration, and diabetic retinopathy are the main causes of blindness, while the main causes of MSVI are uncorrected refractive errors, cataract, age-related macular degeneration, glaucoma, and diabetic retinopathy [2]. In the Americas, the estimates vary substantially across the different Global Burden of Disease (GBD) regions, with blindness estimates ranging from 1.93 cases per 1000 people in southern Latin America (i.e., Argentina, Chile, and Uruguay) to 7.40 cases per 1000 in tropical Latin America (i.e., Brazil and Paraguay) [1]. Most global estimates, however, do not include data from Indigenous Peoples and other ethnic groups, even though those groups are expected to present higher frequencies of ocular diseases and vision loss [3,4,5]. As a result, the burden of vision impairment and blindness may be underestimated, and the public health policies derived from it may insufficiently meet the demand in those minority groups. Including those groups in population-based samples is often challenging due to the low number of individuals in comparison with the overall population and/or due to the low response from those specific groups even when they are included in the sampling [6,7]. Developing and implementing services designed to prioritize reaching groups in situations of vulnerability, as with Indigenous Peoples, with quality and affordable eye services was recently listed as one of the main challenges in global eye health [8]. Indigenous individuals can, on certain occasions, be considered one of the most disadvantaged and marginalized populations worldwide [9]. A recent systematic review of vision loss among Indigenous populations has shown a lack of data on the burden of vision loss in most countries and has pointed out the importance of improving the quality and quantity of research on eye health and eye care in Indigenous communities [10]. 
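Rates quoted per 1000 people translate directly into expected case counts for a given population. A small sketch applying the GBD blindness rates above to a hypothetical population of 200,000 adults:

```python
def expected_cases(rate_per_1000, population):
    """Expected case count implied by a prevalence expressed per 1000 people."""
    return rate_per_1000 * population / 1000.0

# GBD blindness rates quoted above, applied to a hypothetical
# population of 200,000 (the population size is illustrative only):
for region, rate in [("southern Latin America", 1.93),
                     ("tropical Latin America", 7.40)]:
    print(region, round(expected_cases(rate, 200_000)))
# southern Latin America 386
# tropical Latin America 1480
```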
Different Indigenous groups from different nations have unique characteristics in language, culture, environmental risk factors, and political autonomy, yet, as a result of the colonization process, many face similar health disparities and social disadvantages [11]. Indigenous groups currently account for around $17\%$ of those living in extreme poverty in Latin America, even though they represent less than $8\%$ of the population [12]. It is estimated that in 2010 there were at least 44.8 million Indigenous persons in Latin America, representing 826 Indigenous Peoples mainly concentrated in Mexico (seventeen million people) and Peru (seven million), followed by Guatemala and Bolivia (six million each) [13]. While they are the majority of the population in Bolivia ($62\%$) and Guatemala ($60\%$), they represent less than $2\%$ in Brazil, Colombia, Venezuela, and the Caribbean [14,15]. In the United States (USA), 6.6 million Alaskan Natives and Native American Indians ($2\%$ of the general population) live in 567 tribes, and 326 Indian reserves are officially recognized by the federal government [16]. In Canada, $5\%$ of the total population identifies as Indigenous, totaling 1.8 million individuals from First Nations, Métis, and Inuit groups [17]. The purpose of the current study is to conduct a review of the burden of vision impairment and blindness and of ocular disease occurrence in the Indigenous Peoples of the Americas, comparing it to estimates based on non-indigenous populations and identifying gaps in the literature. ## 2. Materials and Methods We systematically reviewed findings on frequencies of vision impairment and blindness or frequencies of ocular findings such as cataract, under-corrected refractive errors, glaucoma, age-related macular degeneration, diabetic retinopathy, pterygium, trachoma, and onchocerciasis in Indigenous populations in the Americas. 
We searched for any study evaluating eye health, not limiting the sources to population-based data. The search combined terms related to three concept areas: population (Indigenous), outcome (vision impairment/blindness and ocular findings), and study site (the Americas). Term selection was based on previous systematic reviews and combined key terms adapted for each database and medical subject headings (MeSH) as applicable. We searched for studies in any language, indexed from 1 January 2000 to 1 November 2022. We screened the selected papers in terms of (1) reporting frequencies of vision impairment or blindness or frequencies of ocular diseases; (2) reporting results for an Indigenous population; and (3) reporting data from populations resident in any region of the Americas. We excluded articles that did not include an Indigenous group, were iterations, were program evaluations or experimental studies, were not primary studies, were from the gray literature, or used identical data sources to prior studies. Because many studies on Indigenous Peoples have not reported response rates, we did not impose any minimum response rate limit. Self-reported outcome data were not included. The following information was extracted from each selected study: author, year of publication, country, Indigenous group, study design, sample size, participants’ ages, main outcome, method for visual acuity and definitions for vision impairment and blindness, frequency of vision impairment and blindness, and/or frequency of ocular diseases. The results were presented separately according to the GBD regions classification (Table 1). We presented the results as descriptive tables for frequencies of vision impairment and blindness and for frequencies of ocular diseases in the population. 
As most of the studies adopted different criteria for the definitions of vision impairment and blindness and varied in measurement method (i.e., uncorrected, presenting, and best-corrected visual acuity), we could not standardize estimates and summarize the findings per region, and we therefore present descriptive data along with the specificities of each estimate. ## 3. Results The database search yielded 2829 citations, of which 2747 were excluded. We screened the full texts of 82 records for relevance and excluded 16. The remaining 66 articles were examined thoroughly, and 25 presented sufficient data to be included in the current review. Another 7 articles derived from references were included, for a total of 32 selected studies. Figure 1 shows the flowchart of record selection. Of the 32 selected studies, 14 ($43.75\%$) were conducted in tropical Latin America (13 in Brazil and 1 in Paraguay), 12 ($37.50\%$) in high-income North America (8 in the USA and 4 in Canada), 4 ($12.50\%$) in central Latin America (2 in Colombia, 1 in Mexico, and 1 in Venezuela), 1 ($3.12\%$) in Andean Latin America (Ecuador), and 1 ($3.12\%$) in the Caribbean (Haiti). No studies from southern Latin America were included. A total of 11 studies ($34.37\%$) reported frequencies of vision impairment and blindness, most of them from high-income North America. No studies from Andean Latin America, the Caribbean, or southern Latin America presented data on vision impairment and blindness. Great variability in visual acuity measurement methods, as well as in vision impairment and blindness definitions, was observed. Table 2 shows the frequencies of vision impairment and blindness according to GBD region, along with each study population’s Indigenous group and age and the categories’ definitions. 
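The per-region shares reported above are simply study counts divided by the 32 included studies; a quick check:

```python
# Study counts per GBD region as reported in the results above.
counts = {
    "tropical Latin America": 14,
    "high-income North America": 12,
    "central Latin America": 4,
    "Andean Latin America": 1,
    "Caribbean": 1,
}
total = sum(counts.values())  # 32 included studies
shares = {region: 100.0 * n / total for region, n in counts.items()}
print(total)                              # 32
print(shares["tropical Latin America"])   # 43.75
print(shares["central Latin America"])    # 12.5
```

The $3.12\%$ quoted for Andean Latin America and the Caribbean is 3.125 truncated to two decimals.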
Despite the differences in vision impairment and blindness definitions, there is a clear difference in frequencies between high-income North America and tropical Latin American countries. When considering adults over 40 years old and the best-corrected visual acuity (BCVA) method, the highest frequencies of vision impairment and blindness in high-income North America reach $11.1\%$ [27], while in tropical Latin America they can reach $28.5\%$ [19]. A total of 26 studies ($81.25\%$) reported frequencies of ocular diseases, most of them from tropical Latin America and high-income North America. Trachoma was the main condition evaluated, discussed in nine studies ($34.61\%$), with six in tropical Latin America and three in central Latin America. Cataract was evaluated in seven studies ($26.92\%$): three in high-income North America, three in tropical Latin America, and one in the Caribbean. Interestingly, all six studies evaluating diabetic retinopathy ($23.07\%$) were from high-income North America. Pterygium was evaluated in five studies ($19.23\%$), four from tropical Latin America and one from the Caribbean. Table 3 shows the frequencies of ocular diseases according to GBD region, along with each study population’s Indigenous group and ages. ## 4. Discussion This study presents an overall panorama of the ocular health of Indigenous Peoples in the Americas. The main limitation, however, is the shortage of data. The low number of records retrieved from our literature review reflects the scarcity of studies focused on eye health in Indigenous populations in the Americas. Of the 33 countries in the Americas, only 7 ($21\%$) had data on vision impairment/blindness and/or ocular disease in Indigenous groups. The lack of studies is particularly evident in Andean Latin America, where a high percentage of the population self-identifies as Indigenous and yet is underrepresented [14]. 
No studies were found for southern Latin America, which is the sub-region with the lowest frequencies of Indigenous Peoples in the general population. The most recent worldwide estimates of vision impairment and blindness, however, have included data from most countries in the Americas, indicating the availability of population-based surveys and therefore reinforcing the underrepresentation of Indigenous Peoples in these calculations. While part of these studies might have included Indigenous groups in their samples, most of them used the RAAB (Rapid Assessment of Avoidable Blindness) methodology, a format that does not disaggregate information on ethnicity, further limiting the analysis of the burden of disease in Indigenous populations specifically and comparisons between Indigenous and non-indigenous groups [50]. Most studies on the frequency of vision impairment and blindness were conducted in high-income North America. According to the GBD, the prevalence of moderate to severe vision impairment (MSVI: VA < $\frac{20}{63}$ to VA ≥ $\frac{20}{400}$) and blindness (VA < $\frac{20}{400}$) in the general population aged 50 years and older in the region was $3.28\%$ and $0.40\%$, respectively [1]. Despite the different criteria for classification, the frequencies of vision impairment and blindness in the Indigenous populations evaluated were higher than those presented by the GBD, with values in older adults ranging from $3.10\%$ [28] to $12.80\%$ [25] for vision impairment and $0.30\%$ [28] to $1.90\%$ [27] for blindness. Tropical Latin America is one of the sub-regions with the highest estimated rates of MSVI ($10.60\%$) and blindness ($2.71\%$) in older adults in the Americas [1]. 
A recent study performed with residents of the Xingu Indigenous Park in Brazil following the same GBD classification criteria has shown frequencies of MSVI and blindness substantially higher than those calculated for the general population, reaching $22.58\%$ and $5.92\%$, respectively, in adults 45 years and older [19]. The only study from central Latin America evaluated individuals 20 years and older in Mexico and found a prevalence of presenting visual acuity <$\frac{20}{60}$ in $10\%$ of the population [18]. The estimates for MSVI and blindness considering best-corrected visual acuity in adults 50 years and older in the region were $10.70\%$ and $1.83\%$ [1], but due to the different measurement criteria and definitions, we are not able to make direct comparisons. The general estimates of vision impairment and blindness for Andean Latin America (MSVI: $13.00\%$; blindness: $2.20\%$), the Caribbean (MSVI: $8.22\%$; blindness: $1.74\%$), and southern Latin America (MSVI: $6.59\%$; blindness: $0.66\%$) could not be compared to Indigenous Peoples due to the lack of studies on these groups in those specific countries [1]. In 2020, cataract and under-corrected refractive error composed $50\%$ of all global blindness and $75\%$ of all global MSVI [2]. Other causes included glaucoma, age-related macular degeneration, and diabetic retinopathy, these being the five most studied conditions in the general population. Diabetic retinopathy was the smallest contributor to blindness in 2020 among those; however, it was the only cause of blindness that showed a global increase in prevalence from 1990 to 2020, particularly in the high-income North America sub-region [2]. While the data retrieved from studies of Indigenous populations cover extensive age ranges and do not necessarily represent the disease frequency or the cause of MSVI and blindness, a differential pattern of disease focus is observed among the sub-regions. 
While $66.7\%$ of the studies from high-income North America presented data on diabetic retinopathy, none of the studies from the other regions evaluated this condition. The cataract rates in older adults, regardless of visual acuity status, varied from $12.2\%$ in Northwestern and Alaskan Natives in the USA [28] to $54.5\%$ in groups from the Xingu Indigenous Park in Brazil [19]. These values are sensitive to the population’s access to cataract surgeries, which may explain the high frequency of the disease in Indigenous populations with limited access to specialized eye health services. Few studies evaluated refractive errors, with rates reaching up to $62\%$ in Brazilian communities [43]. The effective cataract surgical coverage (eCSC) and the effective refractive error coverage (eREC) are indicators requested by the WHO in order to meet the 2030 Sustainable Development Goals [51]. eCSC refers to the proportion of people who have received cataract surgery and have a resultant good quality outcome relative to the number of people in need of cataract surgery [52]. Similarly, eREC refers to the proportion of people who have received refractive correction and have a resultant good quality outcome relative to the number of people in need of refractive correction [53]. These indicators are ideal not only to track changes in the uptake and quality of eye care services but also to contribute to monitoring progress towards universal health care in general [54]; however, none of the studies of Indigenous populations in the Americas have reported eCSC or eREC. A previous analysis of Indigenous versus non-indigenous groups in Australia has shown that eCSC was significantly better in non-indigenous Australians than in Indigenous Australians ($88.5\%$ vs. $51.6\%$) [55]. Pterygium is a condition commonly evaluated in the studies as its occurrence is associated with geographic locations characterized by low latitude and high ultraviolet exposure. 
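eCSC and eREC are, at their core, proportions of the people in need who received the intervention with a good outcome. A simplified sketch of the calculation (the counts below are hypothetical, chosen only to reproduce shares like the Australian eCSC figures cited; the official indicators involve additional operational definitions):

```python
def effective_coverage(good_outcome, in_need):
    """Simplified effective-coverage indicator: people treated with a good
    outcome as a share (%) of everyone in need of the intervention."""
    return 100.0 * good_outcome / in_need

# Hypothetical counts reproducing shares like the reported Australian
# eCSC comparison (88.5% non-indigenous vs. 51.6% Indigenous):
print(round(effective_coverage(885, 1000), 1))  # 88.5
print(round(effective_coverage(516, 1000), 1))  # 51.6
```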
In that sense, studies from the Caribbean and central and tropical Latin America have reported frequencies from $12.8\%$ [40] to $27.1\%$ [43]. The population profile is a determinant of pterygium development, so people with an outdoor lifestyle tend to be more likely to develop the disease due to direct UV exposure. The disease is also highly prevalent in non-indigenous populations in equatorial areas, with prevalence reaching up to $58.8\%$ [56]. Ocular infectious diseases are highly associated with lifestyle, access to clean water, and basic sanitation, and therefore can be highly prevalent in Indigenous communities [57]. Trachoma and onchocerciasis were evaluated in $73\%$ of the studies from central and tropical Latin America, reflecting the concern about such conditions in these regions. Onchocerciasis was identified in two studies in Brazil, affecting up to $68.6\%$ of a Yanomami community [35]. Trachoma was identified in both central and tropical Latin America with frequencies ranging from $6.9\%$ [31] to $41.8\%$ [34]. Moreover, one study in Brazil evaluated parasitic keratitis in Arawak, Tukano, and Maku peoples, finding a frequency of $17.2\%$ [43]. Onchocerciasis was formerly prevalent in 13 foci in Brazil, Colombia, Ecuador, Guatemala, Mexico, and Venezuela [58]. In response, the Pan American Health Organization (PAHO) established the Onchocerciasis Elimination Program for the Americas (OEPA) in 1992 with the main purpose of guiding countries towards the goal of eliminating onchocerciasis in Latin America [59]. In general, the strategy included six-monthly mass administration of ivermectin (Mectizan®, Merck & Co. Inc., Rahway, NJ, USA) with coverage equal to or higher than $85\%$ of the eligible population [59]. The onchocerciasis elimination program in Latin American countries has been ongoing since 1996 [60]. 
To date, onchocerciasis transmission has been eliminated from 11 of the 13 previously endemic disease foci in Latin America, and four of the six endemic countries have been verified by PAHO as having eliminated the disease (Colombia, Ecuador, Guatemala, and Mexico) [61]. Trachoma is the world’s leading infectious cause of blindness and is endemic in several parts of the world [62]. Mexico was the first country in the Americas to eliminate trachoma as a public health problem, as validated by PAHO in 2017, but trachoma is still a concern in four countries in Andean, central, and tropical Latin America: Brazil, Colombia, Guatemala, and Peru [63,64]. PAHO/WHO support countries in implementing the SAFE strategy (i.e., surgery, antibiotics, facial cleanliness, and environmental improvement), a program that consists of surgery to treat advanced trachoma (trichiasis), antibiotics (azithromycin) to clear the agent of infection, facial hygiene, and environmental improvements to reduce transmission from one person to another [63]. While adherence to the strategy might be more challenging in Indigenous communities, the example from Mexico reinforces the importance of partnership with local leaders and authorities, who can enhance the population’s trust in the program and improve its outcomes [64]. Other conditions observed in the reviewed studies include glaucoma and under-corrected refractive errors. Glaucoma was present in a relatively small proportion of the populations of Brazil and the USA but at a high frequency of $19.1\%$ in Haiti [30]. The high frequency of glaucoma in Haiti could be influenced by the limited racial variation and the higher environmental temperature [30,65]. In general, the high rates of cataract and under-corrected refractive errors reflect the poor access of Indigenous populations to specialized care. 
The access is likely associated with education and economic status, factors that could not be evaluated in the current review due to the lack of information in the selected studies [66,67]. There are significant disparities in the number and distribution of ophthalmologists in American countries, as they tend to be concentrated in more developed cities, leaving remote areas, where most Indigenous Peoples are concentrated, with a low density of ophthalmologists [66]. Due to a lack of access to and utilization of eye care services, Indigenous Peoples in the Amazon may combine several social determinants of blindness and visual impairment, such as ethnicity, place of residence (rural remote areas), socioeconomic status (poverty), and education (low levels of schooling). In Guatemala, with a high percentage of Indigenous population and a high prevalence of blindness [67], the determinant “place of living” might not be as important as in the Amazon, but others are present among Indigenous groups. More recently, social, political, and economic crises have motivated intense migratory movements and refugee requests in Latin American countries, with an increasing number of Indigenous individuals living in public or self-managed shelters or even on the street in extreme poverty. These conditions represent an extra challenge to addressing not only the visual, but also the general health care needs of such groups [68,69,70]. Improving Indigenous eye health in the Americas is particularly challenging, mainly due to limited access and inequalities in care. More than achieving universal health coverage in a country, equity should be prioritized; otherwise, socially advantaged groups will be more likely to use the new or improved services [71,72]. 
Specific actions include the following: (1) access: increasing the number of clinic sites, rural locations, and eye care sessions, not only with ophthalmologists but also with other eye health practitioners such as optometrists, ophthalmic technologists, and/or trained nurses, should improve the number of patients seen, spectacle dispensing, and surgery referrals [72,73]; (2) integration with family medicine/primary care: several communities have general health programs with systemic condition screening and could include ocular health screening tools in their practice to detect and timely refer cases of vision impairment and blindness for specialized care [19,72,74]; (3) telemedicine: several telemedicine protocols in ophthalmology focused on diabetic retinopathy, glaucoma, and cataract have been shown to be effective in populations living in remote areas and should be used as models for Indigenous population groups [75,76,77]; (4) customized propaedeutics: specific techniques should be indicated for populations living in remote areas, for example, manual small incision cataract surgery (MSICS) techniques in resource-constrained health care settings such as Indigenous communities [78]; (5) education on eye health: by promoting basic knowledge of eye health, the population can better understand the importance of seeking timely treatment, improving visual outcomes [79,80]; (6) quality data: more studies focused on Indigenous populations’ eye health should be performed with appropriate methodology and collection of key indicators such as eCSC and eREC, and studies performed in the general population should collect data on participants’ ethnicity/race [52,53]. ## 5. Conclusions Despite the shortage of data, our findings show a higher frequency of vision impairment and blindness in Indigenous populations when compared to worldwide estimates for all sub-regions in the Americas. 
Most of the ocular diseases reported are preventable and/or treatable, so blindness prevention programs should focus on access to eye examinations, cataract surgeries, control of infectious diseases, and spectacles distribution. Finally, more epidemiological studies of Indigenous populations, with higher methodological quality and consistent indicators, are recommended in order to understand the burden of disease and optimize programs developed for these groups.
# Factors Associated with Impaired Resistive Reserve Ratio and Microvascular Resistance Reserve ## Abstract Coronary microvascular dysfunction (CMD) is described as an important subset of ischemia with no obstructive coronary artery disease. Resistive reserve ratio (RRR) and microvascular resistance reserve (MRR) have been proposed as novel physiological indices evaluating coronary microvascular dilation function. The aim of this study was to explore factors associated with impaired RRR and MRR. Coronary physiological indices were invasively evaluated in the left anterior descending coronary artery using the thermodilution method in patients suspected of CMD. CMD was defined as a coronary flow reserve <2.0 and/or an index of microcirculatory resistance ≥25. Of 108 patients, 26 ($24.1\%$) had CMD. RRR (3.1 ± 1.9 vs. 6.2 ± 3.2, $p \leq 0.001$) and MRR (3.4 ± 1.9 vs. 6.9 ± 3.5, $p \leq 0.001$) were lower in the CMD group. In the receiver operating characteristic curve analysis, RRR (area under the curve 0.84, $p \leq 0.001$) and MRR (area under the curve 0.85, $p \leq 0.001$) were both predictive of the presence of CMD. In the multivariable analysis, previous myocardial infarction, lower hemoglobin, higher brain natriuretic peptide levels, and intracoronary nicorandil were identified as factors associated with lower RRR and MRR. In conclusion, the presence of previous myocardial infarction, anemia, and heart failure was associated with impaired coronary microvascular dilation function. RRR and MRR may be useful to identify patients with CMD. ## 1. Introduction The traditional understanding is that epicardial coronary artery disease (CAD) plays a major role in ischemic heart disease, although previous registry data showed that less than half of the patients suspected of angina had significant lesions in epicardial coronary arteries [1,2]. 
In this context, ischemia with no obstructive CAD (INOCA) has been increasingly recognized as a major etiology of ischemic heart disease [3,4], in which coronary microvascular dysfunction (CMD) and vasospastic angina are described as important subsets of INOCA in the expert consensus document [5]. Since CMD reportedly deteriorates a patient’s quality of life and prognosis [3,6], accurate identification and diagnosis are clinically relevant. Coronary flow reserve (CFR), the ratio of hyperemic to resting blood flow, represents the capacity of the coronary circulation, including the epicardial coronary arteries and microvasculature, to accommodate an increasing demand for oxygen at exercise or stress [7]. Since reduced CFR indicates the presence of CMD when no significant epicardial CAD exists, the recent European and American guidelines recommend the measurement of CFR in patients suspected of INOCA [8,9]. CFR relies on resting flow for its calculation, and, thus, hemodynamic perturbations, including changes in heart rate, blood pressure, and left ventricular contractility, affect the CFR value [10]. Recently, resistive reserve ratio (RRR) and microvascular resistance reserve (MRR) have been proposed as novel physiological indices representing coronary microvascular dilation function [11,12]. Given that these indices take into account information on coronary pressure as well as flow [11,12], RRR and MRR may better estimate coronary microvascular function compared with CFR. Indeed, previous reports showed that RRR was superior to CFR in predicting future cardiovascular events in patients with CAD [13,14]. However, data are scarce on factors related to RRR and MRR. The aim of the present study was to explore factors associated with impaired RRR and MRR. ## 2.1. Study Population This was a retrospective, single-center study at Chiba University Hospital. 
Between July 2020 and June 2022, a wire-based coronary physiological assessment was conducted on 117 patients who were suspected of having CMD due to their chest pain with no apparent epicardial CAD. The invasive physiological assessment was performed in the LAD. Patients with a physiological assessment in a nonelective setting (i.e., acute coronary syndrome) ($n = 5$) and missing data ($n = 4$) were excluded. In addition, patients with angiographically significant epicardial CAD (percentage of diameter stenosis on visual assessment >$50\%$) in the LAD were also excluded. Thus, a total of 108 patients were included in the present analysis. This study was conducted in accordance with the Declaration of Helsinki. The ethics committee of the Chiba University Graduate School of Medicine approved this study (Approval number: M10348, date: 27 July 2022). Informed consent was obtained in the form of opt-out. ## 2.2. Invasive Coronary Physiological Assessment The invasive diagnostic procedure is schematized in Figure 1. After the administration of intracoronary isosorbide dinitrate, a coronary angiography was performed per local standard practice [15,16]. In the present study, wire-based invasive coronary physiological indices were measured by the bolus-saline injection thermodilution method using a 6 Fr guiding catheter with no side holes [17,18]. After equalization, the pressure sensor guidewire (PressureWire X, Abbott Vascular, Santa Clara, CA, USA) was advanced into the distal third of the LAD, and 3 milliliters of room-temperature saline were injected into the LAD three times, automatically calculating the mean transit time (Tmn) with a dedicated system (CoroFlow system, Coroventis Research, Uppsala, Sweden). Simultaneously, mean aortic pressure (Pa) and distal coronary pressure (Pd) were measured. Maximum hyperemia was induced by intracoronary administration of papaverine (12 mg) or nicorandil (2 mg) [16,19]. 
All indices of coronary pressure and flow (i.e., Tmn) were measured at resting and hyperemic conditions. Multiple coronary physiological indices were evaluated in this study as follows: the ratio of Pd to Pa (resting Pd/Pa), fractional flow reserve (FFR), baseline resting index (BRI), index of microcirculatory resistance (IMR), CFR, RRR, and MRR, all of which were calculated using Pa, Pd, and Tmn at rest and hyperemia. FFR was defined as Pd/Pa at hyperemia. BRI and IMR, both of which represent a coronary microvascular tone, were defined as Pd multiplied by Tmn at resting and hyperemic conditions, respectively [13,14,20,21]. CFR was defined as resting Tmn divided by hyperemic Tmn. RRR, the ratio of microvascular tone at rest to that at hyperemia was defined as follows: RRR = BRI/IMR = (resting Pd × resting Tmn)/(hyperemic Pd × hyperemic Tmn) = CFR × (resting Pd/hyperemic Pd) [13,14]. In the present study, MRR was calculated by using indices obtained by a bolus-saline thermodilution method, rather than measured by absolute coronary blood flow using a continuous-saline thermodilution method. The definition of MRR was as follows: MRR = CFR × (resting Pa/hyperemic Pd) = (CFR/FFR) × (resting Pa/hyperemic Pa) = RRR × (resting Pa/resting Pd) [12,22]. The cut-off values for abnormal FFR, IMR, and CFR were determined as ≤0.80, ≥25, and <2.0, respectively [8,9]. In the present study, patients with abnormal CFR and/or IMR (i.e., CFR < 2.0 and/or IMR ≥ 25) were defined as having CMD [5,8,9]. ## 2.3. Endpoints and Statistical Analysis The primary interest of this study was to explore factors associated with impaired (i.e., lower) RRR and MRR. All statistical analyses were performed using JMP pro version 16.0 (SAS Institute Inc., Cary, CA, USA). Continuous variables were expressed as mean ± standard deviation and compared with the Student t-test. Categorical variables were expressed as frequency (%) and assessed with Fisher’s exact test. 
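The index definitions above can be collected into a short sketch. This is an illustrative implementation, not the authors' analysis code; the function and variable names are assumptions, and the relationships encoded in the comments follow the formulas given in the text.

```python
# Sketch (not the authors' code) of the thermodilution-based index
# calculations described above, from mean aortic pressure Pa (mmHg),
# distal coronary pressure Pd (mmHg), and mean transit time Tmn (s)
# measured at rest and at hyperemia.

def coronary_indices(pa_rest, pd_rest, tmn_rest, pa_hyp, pd_hyp, tmn_hyp):
    """Return the physiological indices defined in the text."""
    ffr = pd_hyp / pa_hyp           # fractional flow reserve (Pd/Pa at hyperemia)
    cfr = tmn_rest / tmn_hyp        # coronary flow reserve
    bri = pd_rest * tmn_rest        # baseline resting index (resting microvascular tone)
    imr = pd_hyp * tmn_hyp          # index of microcirculatory resistance
    rrr = bri / imr                 # = CFR x (resting Pd / hyperemic Pd)
    mrr = cfr * (pa_rest / pd_hyp)  # = RRR x (resting Pa / resting Pd)
    return {"FFR": ffr, "CFR": cfr, "BRI": bri, "IMR": imr, "RRR": rrr, "MRR": mrr}

def has_cmd(idx):
    """CMD definition used in the study: CFR < 2.0 and/or IMR >= 25."""
    return idx["CFR"] < 2.0 or idx["IMR"] >= 25
```

For example, with hypothetical measurements Pa/Pd/Tmn of 90/85/0.9 at rest and 85/78/0.25 at hyperemia, CFR is 3.6 and IMR 19.5, so the patient would not meet the CMD definition; the RRR and MRR values satisfy the algebraic identities stated in the text.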
The normal distribution was visually evaluated with histograms. Due to the skewed distribution, a log transformation was performed to assess the level of brain natriuretic peptide (BNP). The receiver operating characteristic (ROC) curve analyses were performed to assess the best cut-off value of RRR and MRR for predicting CMD. Univariable and multivariable linear regression analyses were performed to explore factors related to coronary physiological indices. In the regression models, we included variables reportedly affecting coronary physiological status such as age, sex, body mass index, diabetes, hypertension, previous myocardial infarction (MI), renal function assessed with estimated glomerular filtration rate, anemia evaluated with a hemoglobin level, heart failure estimated by log-transformed BNP, and hyperemic agent (i.e., intracoronary papaverine versus nicorandil) [23,24,25,26,27,28,29,30,31,32]. The results of the regression analysis are displayed in a heat map. As a sensitivity analysis, the univariable and multivariable linear regression analyses were performed after excluding cases in which intracoronary nicorandil was used to achieve maximum hyperemia. A value of $p < 0.05$ was considered statistically significant. No corrections for multiple comparisons were performed. ## 3. Results Of the 108 patients, 26 ($24.1\%$) had CMD (Table 1). Baseline characteristics between patients with and without CMD are summarized in Table 1. Patients with CMD were more likely to be women, while other characteristics were similar between the two groups (Table 1). Coronary physiological findings are shown in Table 2. To achieve maximum hyperemia, intracoronary papaverine and nicorandil were used in $62.0\%$ and $38.0\%$ of patients, respectively. The use of nicorandil was more frequent in women than in men ($79.3\%$ vs. $22.8\%$, $p < 0.001$). FFR, BRI, and IMR were significantly higher and CFR, RRR, and MRR were lower in patients with CMD than those without (Table 2). 
The ROC curve analyses showed that RRR and MRR were both predictive of the presence of CMD (Figure 2). With the best cut-off value, the sensitivity, specificity, positive and negative predictive values, and diagnostic accuracy of RRR ≤ 3.4 and MRR ≤ 3.7 for CMD were 0.77, 0.84, 0.61, 0.92, and 0.82, and 0.77, 0.87, 0.65, 0.92, and 0.84, respectively. In the univariable analysis, female gender, the presence of previous MI, a lower hemoglobin level, higher log-transformed BNP, and intracoronary nicorandil as a hyperemic agent were significantly associated with lower RRR and MRR (Figure 3). Multivariable analysis indicated previous MI, a lower hemoglobin level, higher log-transformed BNP, and intracoronary nicorandil as predictors of lower RRR and MRR (Figure 4). When excluding cases in which intracoronary nicorandil was used to achieve maximum hyperemia (Table 3 and Table 4), the overall results were similar (Figure 5, Figure 6 and Figure 7). ## 4. Discussion The present study demonstrated that among patients suspected of CMD, approximately one quarter had invasively assessed CFR <2.0 and/or IMR ≥25. Patients with CMD had a lower CFR, RRR, and MRR than those without. Multivariable analysis identified previous MI, anemia, and heart failure as factors associated with impaired RRR and MRR. To our knowledge, this is the first report exploring predictors of the novel indices, RRR and MRR, for evaluating coronary microvascular dilation function. ## 4.1. RRR and MRR Recently, INOCA has been of clinical interest, in which CMD is the main subset [5]. Since invasive identification and subsequent medical therapy were shown to improve the quality of life in patients with INOCA [33,34], an accurate diagnosis is clinically relevant. Although CFR (<2.0) and IMR (≥25) are suggested to define INOCA in the guidelines [8,9], whether the two indices can accurately identify patients with CMD remains uncertain. 
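The sensitivity, specificity, predictive values, and accuracy reported in the Results for the RRR cut-off can be reproduced from a standard 2×2 table. The sketch below is illustrative rather than the authors' code; the counts (tp = 20, fp = 13, fn = 6, tn = 69) are approximations back-calculated from the reported proportions for RRR ≤ 3.4 (26 of 108 patients with CMD), not values taken from the paper's tables.

```python
# Standard diagnostic-performance metrics from a 2x2 confusion table.
# tp/fp/fn/tn = true positives / false positives / false negatives / true negatives.

def diagnostic_performance(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Approximate counts implied by the reported proportions (assumption, see lead-in).
perf = diagnostic_performance(tp=20, fp=13, fn=6, tn=69)
```

Rounded to two decimals, these counts give sensitivity 0.77, specificity 0.84, PPV 0.61, NPV 0.92, and accuracy 0.82, matching the values reported for RRR ≤ 3.4.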
CFR is affected by hemodynamic perturbation such as a change in heart rate, blood pressure, and left ventricular contractility [10], and IMR is influenced by the amount of myocardium subtended to the location of the pressure-temperature sensor [35]. To overcome the limitations of CFR and IMR measurement, recently emerged RRR and MRR may be useful for evaluating coronary microvascular dilation function. While previous studies showed that a single physiologic index, such as FFR, CFR, or IMR, was unable to fully discriminate patients at higher risk of clinical events, RRR is an integrated physiologic index of both coronary flow and pressure, potentially resulting in better risk stratification in CMD [13,14]. In fact, a previous single-center study ($n = 1692$) showed that RRR (mean value 2.88) was useful to stratify risks for all-cause mortality in patients with angina or ischemia and nonobstructive CAD, with the best cut-off value of 2.62 [14]. Another patient-level pooled cohort in Korea, Japan, and Spain demonstrated that lower RRR was associated with worse clinical outcomes in a stepwise manner and that even in patients with preserved FFR (>0.80) and CFR (>2.0), lower RRR (<3.5) was related to an increased risk of patient-oriented composite outcomes during the long-term follow-up [13]. The cut-off (median) value for predicting outcomes suggested in the pooled data (i.e., 3.5) was in line with that for the presence of CMD in the present study (i.e., 3.4), although RRR in the present study was numerically higher than that of previous studies [13,14]. In the prior pooled data, >$30\%$ of patients had CFR <2.0 [13], whereas approximately $10\%$ did in the present study, suggesting that our study cohort represented relatively preserved coronary microvascular function. MRR was originally developed as the index measured by absolute coronary blood flow with a continuous-saline thermodilution method using a dedicated microcatheter [36]. 
MRR is conceptually specific for microcirculation and independent of myocardial mass [12]. Although MRR was calculated by using indices obtained with a bolus-saline thermodilution method in the present study, it has the potential to avoid the influence of epicardial CAD and the amount of myocardium [12]. The suggested cut-off value of MRR for the presence of CMD in this study (i.e., 3.7) was slightly higher than that of RRR, which may be reasonable due to the calculation formula (i.e., MRR = RRR × [resting Pa/resting Pd]) [12,22]. Given that CFR, RRR, and MRR were all significantly lower in patients with CMD than those without, multiple physiological assessments can aid in identifying patients with CMD. Further studies are needed to elucidate the cut-off values of RRR and MRR and whether the novel indices are superior to conventional invasive indices such as CFR and IMR in estimating coronary microvascular function. ## 4.2. Factors Associated with RRR and MRR It is conceivable that CMD, greater resting coronary blood flow, or both result in impaired microvascular dilation response (i.e., RRR and MRR) [13,14], which are reportedly associated with several clinical and procedural factors. For instance, FFR was preserved while IMR, CFR, RRR, and MRR were more impaired in women than in men in the univariable analysis in the present study, probably due to the longer hyperemic Tmn (slower coronary blood flow) (Figure 3). However, previous studies showed that women had lower CFR, with a shorter resting Tmn (faster coronary blood flow) [37,38]. The longer hyperemic Tmn in women may be confounded by the higher likelihood of nicorandil use as a hyperemic agent. Indeed, when excluding cases in which maximum hyperemia was achieved by intracoronary nicorandil, female gender was no longer associated with lower CFR, RRR, and MRR in both univariable and multivariable analyses (Figure 6 and Figure 7). 
Women are likely to have impaired CFR, though the underlying mechanisms remain unclear. Additionally, a previous study in which prognostic implications of RRR were evaluated in patients with INOCA showed that the rate of women was higher in patients with reduced RRR (<2.62) than in their counterparts [14]. In the multivariable adjustment with hemoglobin and BNP levels, female gender was no longer a significant factor associated with CFR, RRR, and MRR in the present study, suggesting that anemia may play a role in a higher likelihood of CMD in women. Apart from gender differences, several patient characteristics such as older age and the presence of diabetes are known to be associated with impaired CFR [24,39]. A recent retrospective study showed that MRR was significantly lower in diabetic patients with suspected angina and nonobstructive CAD than those without diabetes [31], and diabetes was also reportedly associated with lower RRR [13,14]. Although the present study did not show the direct relation of diabetes to CFR, the multivariable analysis indicated that patients with diabetes had nonsignificantly lower RRR and MRR. In this study, a multivariable analysis identified previous MI, anemia, and heart failure as factors associated with impaired RRR and MRR. In a recent prospective study in which invasive measures of coronary microvascular function such as CFR and IMR were repeatedly evaluated in patients undergoing primary percutaneous coronary artery intervention for ST-segment-elevation MI, IMR remained high (i.e., 25.6 ± 17.8) at one month after the index event [40]. Patients with a history of MI are likely to have coronary arteriosclerosis and impaired microvascular function [41], probably resulting in lower RRR and MRR. 
Increased resting and impaired hyperemic coronary blood flow in patients with anemia and heart failure were reported in previous investigations [42,43,44] and were also observed in the present study, as supported by the fact that lower hemoglobin and higher BNP levels were associated with a shorter resting Tmn and a lower BRI in the univariable models (Figure 3). It is conceivable that the increased resting coronary blood flow reflected a patient condition where hyperemic status, at least partially, was achieved even at rest, preventing “additional” maximum hyperemia by intracoronary administration of papaverine and nicorandil. Although intracoronary nicorandil is safe and effective for inducing hyperemia [19], the achievable hyperemic effect of intracoronary papaverine may be greater as compared with nicorandil [32], leading to the significant influence of different hyperemic agents (i.e., papaverine vs. nicorandil) on RRR and MRR. According to previous reports, the hyperemic effect of intracoronary papaverine is induced earlier and lasts longer than that of nicorandil [45,46]. However, when excluding cases in which nicorandil was used for inducing maximum hyperemia, the overall results were similar. Thus, we believe that the presence of previous MI, anemia, and heart failure may be significant predictors of impaired RRR and MRR. To estimate whether a patient has CMD in clinical practice, these factors may be taken into account. ## 4.3. Study Limitations There were some limitations in the present study. This was a retrospective, single-center, observational study, and the sample size was modest. The number of patients included in this study may be acceptable for performing the multivariable analyses [47]; however, a larger sample size would be preferred. Although the present study included patients suspected of CMD, only one quarter had CFR <2.0 and/or IMR ≥25. Noninvasive stress tests to evaluate myocardial ischemia were not performed in a uniform manner and thus, the data were not available. 
Different hyperemic agents, such as intracoronary papaverine, nicorandil, intravenous adenosine, and adenosine triphosphate, reportedly have different characteristics in safety, efficacy, and availability in real-world clinical practice. The decision to perform physiological measurement and the selection of hyperemic agent were left to the operator's discretion. Even though the sensitivity analysis confirmed similar results between the entire study population and cases in which intracoronary papaverine was used to achieve maximum hyperemia, a selection bias is possible. In this study, we estimated MRR by using a bolus-saline thermodilution method rather than using a continuous-saline thermodilution method as done in previous reports [12,22]. ## 5. Conclusions Coronary microvascular dilation function assessed with RRR and MRR was impaired in patients with CMD, both of which may help estimate coronary microvascular function. The presence of previous MI, anemia, and heart failure was identified as a factor associated with lower RRR and MRR. The clinical usefulness of RRR and MRR beyond conventional physiological indices such as CFR and IMR deserves further investigation.
# Impact on Glycemic Variation Caused by a Change in the Dietary Intake Sequence ## Abstract This work presents an analysis of the effect on glycemic variation caused by modifying the macronutrient intake sequence in a person without a diagnosis of diabetes. In this work, three types of nutritional studies were developed: (1) glucose variation under conditions of daily intake (food mixture); (2) glucose variation under conditions of daily intake modifying the macronutrient intake sequence; (3) glucose variation after a modification in the diet and macronutrient intake sequence. The focus of this research is to obtain preliminary results on the effectiveness of a nutritional intervention based on the modification of the sequence of macronutrient intake in a healthy person during 14-day periods. The results obtained corroborate the positive effect on glucose of consuming vegetables, fiber, or proteins before carbohydrates, decreasing the peaks in the postprandial glucose curves (vegetables: 113–117 mg/dL; proteins: 107–112 mg/dL; carbohydrates: 115–125 mg/dL) and reducing the average levels of blood glucose concentrations (vegetables: 87–95 mg/dL; proteins: 82–99 mg/dL; carbohydrates: 90–98 mg/dL). The present work demonstrates the preliminary potential of the macronutrient intake sequence for generating alternatives for the prevention and management of chronic degenerative diseases, improving glucose management in the organism and contributing to weight reduction and the overall health of individuals. ## 1. Introduction Consumption of food in human beings is an activity that provides the nutrients necessary for the adequate performance of the organism and prevents various diseases such as diabetes and cardiovascular diseases [1]. 
Consumption of any food generates elevations in glucose levels, defined as “glucose curves,” because of a gradual elevation in blood glucose levels that is subsequently attenuated due to the homeostatic processes of glucose in the organism throughout time [2]. This period is called the postprandial glucose stage, which lasts 4–5 h for each meal taken [3]. Information on the magnitude, fluctuations, and different characteristics of glucose curves (peaks, plateaus, rise and decay times) is defined as “glycemic variability” [4], which has taken on great relevance for the generation of actions toward the development of nutritional interventions. Prolonged postprandial glucose episodes and their high frequency generate one of the main risk factors for developing Type 2 Diabetes Mellitus (T2DM) [5,6] since the average blood sugar level throughout the day is above the basal glucose levels. T2DM is a multifactorial disease occurring in the adult population [7], where the main characteristic is the presence of elevated blood glucose levels throughout the day (episodes of hyperglycemia) [8]. High glucose levels are mainly related to insulin resistance, a condition in which the different cells of the organism cannot respond adequately to the insulin hormone. This condition leads to an increase in insulin secretion by the pancreas, which in turn facilitates the presence of hyperglycemia in the organism due to the cells' diminished response to insulin [9]. This disease is usually the product of many habits detrimental to health maintained throughout the person's life before being diagnosed with T2DM [10]. T2DM is a chronic degenerative disease that leads to a degradation in the quality of life and facilitates the presence of cardiovascular complications [11] and renal, ocular, and liver diseases [12], these being a small part of the set of conditions related to T2DM. 
It should be noted that the disease adds a burden to the control actions of the health sector [13,14], ranging from the provision of proper care for the community [15] to the economic aspect, where the investment required to satisfy this need grows every year [16,17]. Therefore, several alternatives have been developed to prevent the condition, some of them focused on informing society about how a healthy diet and regular physical activity reduce the chances of developing T2DM [18]. Diet is the main factor in the increase in glucose levels, so it is essential to control how we consume food. Several studies have addressed this problem, thus generating alternative solutions to reduce the impact of postprandial glucose, such as Dahl et al. [19], who demonstrated how semaglutide has beneficial effects on the reduction of postprandial glucose, triglycerides, glucagon, and gastric emptying in people with T2DM. Rayner et al. [20] similarly demonstrated the effects of lixisenatide in slowing gastric emptying, thereby improving postprandial glucose dynamics. Vlachos et al. [21] presented an in-depth review of the subject, concluding that reducing carbohydrates in conjunction with a higher fiber intake positively affects postprandial glucose reduction. How different macronutrients are ingested affects the glycemic variability of organisms, modifying the time to glucose elevation, the glucose curve magnitude, and the glucose decay time [22,23]. Sun et al. [24] developed a study of 16 healthy people in which the effect of the proper order for macronutrient intake on glycemic variation was evaluated. The study showed that consuming vegetables followed by proteins and concluding with carbohydrates is an effective strategy to reduce postprandial glucose and prevent the generation of T2DM. Kubota et al. [25] corroborated that the correct order of the food sequence reduces the episodes of postprandial hyperglycemia, improving weight loss and metabolic function. 
Considering the alternatives presented in the various sources of information, and considering that those related works analyze the modification of the macronutrient intake sequence over tests of about 4 to 5 h of glucose monitoring, the main objective of this work is to obtain preliminary results on the effectiveness of a nutritional intervention based on the modification of the order of macronutrient intake in a healthy person. The particularity of this work is to generate continuous glucose measurements during three periods of 14 days in an individual for whom three types of nutritional interventions were developed. In this way, difficult-to-access information is obtained on the behavior of glucose curves derived from the proposed nutritional intervention. Considering the scope of the present work, this research is a first approximation for the larger-scale development of nutritional interventions focused on the modification of the macronutrient sequence that allows the reduction of postprandial glucose levels in healthy people. This way, the necessary conditions are obtained to carry out this experimentation on a larger scale. The results of this research will allow the generation of information for the development of alternative solutions for the prevention of metabolic and chronic degenerative diseases. ## 2.1. Quasiexperimental Study Design The study methodology consists of the steps described below:
- Generation of data: A series of body measurements, an indirect calorimetry test, and a dietary recall were carried out with the participant to approximate the participant's nutritional status and to be able to propose the type of interventions in the macronutrient intake sequence to follow, so that there would be no decompensation in the current type of food intake.
- Implantation of the continuous glucose monitoring sensor: In each test, a new interstitial glucose sensor was implanted to generate data on glucose dynamics.
- Daily diet (Test 1): Subsequently, Test 1 was developed, generating glucose measurements focused on describing the variation of glucose levels under the study subject's daily diet (a mixture of macronutrients with no order in food consumption).
- Regular diet with ordered consumption of macronutrients (Test 2): Test 2 has the objective of obtaining the glucose dynamics when the sequence of macronutrient consumption is modified without generating any change in the participant's regular diet.
- Assisted diet with ordered consumption of macronutrients (Test 3): This test consists of generating measurements of glucose variation under a modification of the participant's daily diet combined with the change in the sequence of macronutrient intake.
- Statistical analysis: Once the three tests were completed, a statistical analysis of the results was performed, which was the core of the study. In this analysis, the impact of the sequence of macronutrient consumption was quantified and contrasted against the postprandial glucose curves generated at each dietary intake. For this purpose, the proportions of macronutrients consumed per intake were related to glucose concentrations, magnitudes of the postprandial glucose peaks, and times at which postprandial glucose stabilizes.
## 2.2. Ethics of Research In the research developed, the health and integrity of the participant were not put at risk in any way, being an observational experiment. Each of the procedures developed was evaluated and authorized by the Ethics Committee of the Faculty of Medicine of the Autonomous University of the State of Morelos (CONBIOETICA-17-CEI-003-201-81112). 
It should be emphasized that the participant was informed of the procedures to be developed and, once informed and in agreement with the guidelines, signed the letter of informed consent. ## 2.3. Instrumentation The instrumentation used in this study consists of an interstitial continuous glucose monitoring (CGM) system (FreeStyle Libre, Abbott®, Chicago, IL, USA) for the acquisition of glucose measurements, an indirect calorimetry system (KORR Medical®, West Valley City, UT, USA) to obtain the participant's daily energy consumption, a bioimpedance scale (BC-545 Segmental, Tanita® brand, Arlington Heights, IL, USA) and a stadiometer for the participant's anthropometric measurements, and a food intake and physical activity diary for macronutrient counting and physical activity intensity. There are no conflicts of interest in this research. There is no relationship with the suppliers of the instrumentation used in the experimentation. The instruments were purchased with funding from CONACYT (project number 320155) and TecNM (project numbers 14002.22P and 14003.22P). ## 2.4. Subject of Study The proposed study is difficult to develop as a population study due to the strict discipline required to carry out the dietary sequence in the required order, the filling of the food intake and physical activity diaries, and the glucose monitoring. Therefore, this study was developed on a pilot basis in a physically active healthy person (without a diagnosis of diabetes or any chronic degenerative disease) who meets the standards of a healthy person proposed by the World Health Organization (WHO) [26]. The participant is a 26-year-old male with a height of 1.78 m and a daily energy intake of 2810 calories (246 calories from physical activity, 591 calories derived from the participant's daily activities and lifestyle, and 1973 calories from energy consumed at rest). 
Body measurements were taken at the beginning of each test, described in Table 1, where weight, body mass index, abdominal circumference, and percentages of muscle, fat, and visceral fat are considered. To ensure the reliability of the information, the participant received instructions to correctly fill out the food intake and physical activity recording instruments (both instruments are standardized forms). In addition, the participant was instructed to record in the instruments any foods that were not consumed or that were added. This was combined with a 24-h recall of the food consumed, carried out by trained personnel. Regarding continuous glucose monitoring, due to the conditions of the measuring instrument, the patient was instructed to take periodic manual measurements throughout the day, avoiding more than four hours between measurements (except for the participant's sleeping hours). The correct storage of glucose readings was corroborated with the report generated by the Abbott® platform. For more information on the monitoring system, we recommend consulting [27]. ## 2.5. Food Sequence Considering the three tests developed, in the case of Tests 2 and 3, the participant was assigned a sequence in the intake of macronutrients, sectioned according to the type of intake developed (breakfast, snack 1, lunch, dinner, and snack 2) and repeating the type of food and its quantity for 4 consecutive days while exchanging the order of macronutrient consumption. (This is because the monotony of the food makes it difficult for participants to adhere to the needs of the experiment.) The sequence is presented using the symbology of the proposed macronutrients, starting with the symbol on the left side and ending with the symbol on the right side (VF-CH-P-FT = 1. Vegetables and Fiber, 2. Carbohydrates, 3. Proteins of animal origin, 4. Fats). 
Table 2 presents the sequence used for each day of intake, denoting macronutrients as follows: P: Proteins of animal origin; CH: Carbohydrates; VF: Vegetables and Fiber; D: Dairy; FT: Fats; FR: Fruits. ## 2.6. Proportions of Macronutrients Ingested Each food ingested by the participant was analyzed with respect to its composition in carbohydrates, lipids, and proteins, obtaining for each macronutrient the weight (grams) and energetic quantity (calories) contained in the food. In addition, the percentage of energy provided by each macronutrient in each of the intakes analyzed was calculated. Table 3 presents the mean and standard deviations of the composition of each macronutrient analyzed in each intake developed for each of the three types of diets analyzed. ## 2.7. Diet for Each Test Developed Throughout the experimentation, several menus were used to ensure proper adherence to the study by the participant. In the case of Tests 2 and 3, each of the menus was appropriately developed in such a way that the proportions of macronutrients ingested were analogous. For the reader to have a clear idea of the menu composition, the following is an example for each of the menus developed in each test. ## 2.8. Glucose Curve The analysis of glycemic variability considers information on the dynamics of the glucose curves produced at each dietary intake. Figure 1 shows a contrast between the glucose measurement (left graph) and the respective magnitude of the curve generated after a meal (right graph). The beginning of the curve is the moment when food intake occurs, followed by the absorption of macronutrients by the organism, followed by a pronounced elevation in glucose levels until reaching the maximum peak, from where a decrease in glucose begins because of the homeostatic regulation process generated by the organism, thus generating abrupt changes in glucose derived from the effect of insulin secretion. 
Once the decline is complete, glucose stabilizes, thus attenuating the postprandial glucose curve generated. The magnitude of the glucose elevation (right graph) is calculated by subtracting the initial measurement of the glucose curve from each measurement over the time of the curve, thus having a magnitude of 0 mg/dL at the beginning, which over time can have positive or negative glucose concentration values due to the different types of absorption of the macronutrients ingested. ## 3.1. Glucose Measurement In each test, 14 days of glucose measurements were generated. The result of the glucose variation in each test is presented in Figure 2, positioning in the upper part the glucose dynamics according to a daily diet (Test 1), in the central part the dynamics according to a daily diet modifying the sequence in the macronutrient intake (Test 2), and finally, in the lower part the glucose variation according to a modification in the diet and sequence of macronutrient intake (Test 3). The difference between tests is clear according to each of the glucose curves, being greater in Test 1 since there is no fixed schedule for food intake, contrary to what happened in Test 3, where the timing of the glucose curves is constant. Consequently, the behavior of glucose is more homogeneous. Quantitatively, the average variation between each of the tests is described in Table 4, where the average glucose data, the glucose management indicator (based on that proposed by Leelarathna et al. [28]), and the glucose coefficient of variation (considering that proposed by Rodbard [29]) are presented. These data were calculated for the total glucose measurements in each test performed over the 14 days of measurement. The results demonstrate how a higher glucose variation coefficient correlates with lower glucose concentrations and a lower glucose management indicator. This phenomenon is visible when comparing Test 1 results with those in Test 3. 
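The summary metrics above (average glucose, coefficient of variation, glucose management indicator) and the baseline-subtracted curve magnitude described in Section 2.8 can be sketched as follows. This is an assumed implementation, not the authors' code: the GMI line uses the commonly cited formula GMI(%) = 3.31 + 0.02392 × mean glucose (mg/dL), and the readings in the example are hypothetical.

```python
# Sketch (assumed implementation) of the glucose summary metrics and the
# baseline-subtracted postprandial curve magnitude described in the text.
from statistics import mean, pstdev

def summary_metrics(readings_mg_dl):
    m = mean(readings_mg_dl)
    cv = 100 * pstdev(readings_mg_dl) / m  # coefficient of variation, %
    gmi = 3.31 + 0.02392 * m               # glucose management indicator, % (commonly cited formula)
    return m, cv, gmi

def curve_magnitude(postprandial_readings):
    """Subtract the first (intake-time) reading from every point on the curve."""
    baseline = postprandial_readings[0]
    return [g - baseline for g in postprandial_readings]

readings = [88, 95, 120, 135, 110, 92]     # hypothetical postprandial trace, mg/dL
m, cv, gmi = summary_metrics(readings)
print(curve_magnitude(readings))           # [0, 7, 32, 47, 22, 4]
```

The magnitude series starts at 0 mg/dL and, as noted in the text, can take positive or negative values as the curve rises above or falls below the intake-time reading.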
Five types of intakes were generated for each day throughout the tests. The postprandial glucose average of each intake developed throughout every test is presented in Table 5, where the postprandial glucose concentrations are lower at breakfast (ranging between 83–89 mg/dL), since it starts from a condition close to the basal level, contrary to dinner, where glucose ranges between 95–100 mg/dL because the glucose curve starts from a higher level: the time gaps between intakes prevent homeostasis from returning glucose to a basal level after the postprandial period.

## 3.2. Food Sequence Modification Effect on Glucose

There are marked differences between the intakes analyzed in each test. Therefore, this work explored the magnitudes, elevation times, and stabilization times in the different glucose curves developed in each intake. To evaluate the impact of the sequence of macronutrients in the dietary intake, the glucose curves were grouped into three different patterns: (a) carbohydrate intake at the beginning; (b) vegetable and fiber intake at the beginning; (c) animal protein intake at the beginning. The results of this analysis are presented in Table 6, highlighting the following aspects: (a) higher maximum glucose peaks occur when carbohydrates are consumed first; (b) the consumption of vegetables and fiber or proteins generates lower average glucose levels in contrast to an early consumption of carbohydrates; (c) early consumption of carbohydrates generates shorter periods of elevation and stabilization in the glucose curves in contrast to an early consumption of proteins or vegetables, resulting in higher average glucose levels. As a complement to Table 6, Figure 3 illustrates the dynamics of the glucose curves when the first macronutrient ingested is carbohydrates. It shows three graphs corresponding to the three main intakes (breakfast, lunch, and dinner).
Each chart has two colors, red (referring to the results obtained in Test 2) and blue (Test 3), representing the dynamics of the intake developed throughout the experiment, thus illustrating the postprandial glucose during a 5 h measurement period. Considering the numerical results presented, the postprandial glucose dynamics show lower elevations at breakfast (maximum glucose peaks below 130 mg/dL) than at lunch, where glucose peaks reach values close to 150 mg/dL, and dinner, with peaks above 150 mg/dL.

## 4. Discussion

Recently, the concept regarding the sequence and frequency of macronutrient intake has gained strength due to the benefits it generates for the organism [30]. Paoli et al. [31] provide an example of this, where modifying the frequency of intakes reduces intestinal inflammation and improves autophagy and stress resistance. Henry et al. [32] describe how consuming vegetables before carbohydrates is a strategy capable of optimizing glycemic control and positively influencing postprandial glucose. On the other hand, King et al. [33] describe how consuming a small dose of whey protein before a mixed-macronutrient meal stimulates insulin secretion and improves postprandial glucose in people with T2DM. Considering the current need for information and alternative solutions for the prevention of T2DM, this work developed the experimentation necessary to determine the effects of the sequence of macronutrient intake on glucose reduction after the adoption of dietary regimens that promote a decrease in glucose levels, evaluated over 14-day periods. The results preliminarily demonstrate the effectiveness of nutritional regimens focused on the anticipated consumption of vegetables, fiber, or protein for the reduction and good maintenance of glucose levels.
The best results came from an early intake of vegetables and fiber, with an average glucose of 87–95 mg/dL and peak and stabilization times starting at 2.34 and 2.96 h, respectively. In this way, pronounced postprandial glucose episodes are avoided and the shape of the glucose curves is flattened, in contrast to early carbohydrate intake, which has peak values of 130–150 mg/dL. The development of the experimentation presents significant difficulty for the participant and the researcher who carries it out, because a dietary plan must be designed for each participant, thus promoting adherence to the menu and guaranteeing the correct performance of the experiment. The gradual variety of the menus is of utmost importance since it favors the participant’s comfort and decreases the probability of desertion during the experimentation. The participant’s correct development of the experiment must be corroborated through the completion of food diaries and continuous glucose measurements. When scaling the sample, it is advisable to consider the points mentioned above, thus favoring the conditions for correct development of the experimentation. For population studies, two experimental periods of 14 days each should be developed: in the first, glycemic variation is evaluated under an assisted diet, and in the second, under a similar diet but varying the sequence in which the different types of macronutrients are ingested. A 14-day rest period between the tests is recommended; otherwise, adherence to the experimentation is challenging in the second stage. The complexity of this type of research illustrates the difficulties participants face in adhering to a dietary regimen [34,35]. Consequently, this type of research is usually limited to evaluating a single glucose curve in a population of healthy individuals, as is the case of Sun et al.
[24], with a population of 16 healthy individuals. Alternatively, research is limited to evaluating people with gestational diabetes mellitus (GDM), as proposed by Yong et al. [36], where, similar to what is presented in this work, glycemic variation was analyzed in 10 women with GDM, exchanging the sequence of macronutrients and measuring glucose with a CGM. The results of this work agree with Sun et al. [24] and Yong et al. [36], where an early consumption of vegetables, fiber, or protein reduces postprandial glucose. Taking these works as a reference point, both use only one feeding plan due to the shorter duration of their experimentation, contrary to the case presented in this work, where the length of the investigation makes it necessary to change the feeding plan periodically. The classification of the macronutrients consumed in these studies is similar to the method used in this research, where food is classified according to the predominant macronutrient in its composition. The particularity of this work is focused on four specific points: (a) development of an analytical study on the effect of food sequence on postprandial glucose curves and the impact on glucose levels throughout the research period, with statistical analysis being a fundamental part of generating the results obtained; (b) an experimentation time of 42 days divided into three different tests; (c) periodic changes in meal plans to achieve patient adherence to the experimentation; (d) contrast between three different conditions of glycemic variation derived from the proposed tests (the basis for experimentation on a larger population). The experimentation developed is a pilot test that serves as a reference to evaluate the feasibility of carrying it out in a larger population under specific conditions of degradation in glucose homeostasis. Although the measures that must be taken to develop the experimentation are extensive, the benefits gained from it are significant.
These can be included in the wide range of nutritional alternatives that can be proposed for the management and prevention of diabetes. One of the main benefits is the possibility of addressing the problem without generating an extra economic cost derived from its treatment. The present work opens a window of opportunity for developing several topics focused on managing and preventing metabolic diseases from a nutritional point of view.

## 5. Conclusions

In this work, three types of nutritional studies were developed to analyze the effect of managing the order of macronutrient intake. The results are consistent with the literature, indicating that early consumption of vegetables, fiber, or protein reduces the size of the postprandial glucose curves, thus decreasing blood glucose levels and improving glucose homeostasis in the organism. Considering that the work is a pilot test, based on the results obtained and the recommendations proposed for carrying out the experimentation, it is feasible to develop it at a larger sample scale. This research is a potential milestone for the generation of knowledge focused on improving glucose homeostasis in different treatments for diabetes. This work creates the possibility of alternatives for the prevention and control of type 2 diabetes based on changes in the dietary sequence, in conjunction with pharmacological treatment (in the case of diabetes), that do not generate an extra economic expense for the health sectors and the people treated in them.
# The Usage and Trustworthiness of Various Health Information Sources in the United Arab Emirates: An Online National Cross-Sectional Survey

## Abstract

Background: The increase in the quality and availability of health information, as well as the accessibility of Internet-based sources, has driven growing demand for online health information. Information preferences are influenced by many factors, including information needs, intentions, trustworthiness, and socioeconomic variables. Hence, understanding the interplay of these factors helps stakeholders provide current and relevant health information sources to assist consumers in assessing their healthcare options and making informed medical decisions. Aims: To assess the different sources of health information sought by the UAE population and to investigate the level of trustworthiness of each source. Methods: The study adopted a descriptive online cross-sectional design. A self-administered questionnaire was used to collect data from UAE residents aged 18 years or above between July 2021 and September 2021. Health information sources, their trustworthiness, and health-oriented beliefs were explored through univariate, bivariate, and multivariate analysis in Python. Results: A total of 1083 responses were collected, of which 683 ($63\%$) were from females. Doctors were the first source of health information ($67.41\%$) before COVID-19, whereas websites were the first source ($67.22\%$) during the pandemic. Other sources, such as pharmacists, social media, and friends and family, were not prioritized as primary sources. Overall, doctors had a high trustworthiness of $82.73\%$, followed by pharmacists with a high trustworthiness of $59.8\%$. The Internet had a partial trustworthiness of $58.4\%$. Social media and friends and family had a low trustworthiness of $32.78\%$ and $23.73\%$, respectively.
Age, marital status, occupation, and degree obtained were all significant predictors of Internet usage for health information. Conclusions: The population in the UAE commonly obtains health information from doctors, who were shown to have the highest trustworthiness, even though doctors were not the most commonly used source during the pandemic.

## 1. Introduction

The massive expansion of the Internet and social media, as well as their ease of use and wide accessibility, has led to a rise in health information-seeking behaviors. Despite the wide range of sources, accessing reliable health information remains challenging and elusive, with untrusted and uncredible sources potentially harming individuals’ health [1]. Therefore, researchers and clinicians aim to understand individuals’ health information patterns to better engage them and promote successful behaviors [2]. Such behaviors include tackling health-threatening situations, making health-impacting decisions, and prioritizing preventive health habits. Sources of health information can be categorized as Internet-based, entertainment-oriented, and information-oriented. Internet-based resources comprise broadly reaching mass data, such as blogs, websites, and social media, while entertainment-oriented sources include TV and podcasts [3]. Comparatively, information-oriented resources include healthcare providers or printed materials such as newspapers and brochures [4]. A global review study has shown that more than half of the public uses the Internet as a source of health information [5]. Health information research also includes evaluating a multitude of determinants for each resource. For example, the trustworthiness of a health information resource heavily determines its usage frequency and value, which in turn depend on various sociodemographic features [6].
Other research focuses on the different motives behind health information searching, which include symptom troubleshooting, searching before or after a clinical visit, or obtaining information for others [7,8]. For instance, individuals with long-standing diseases need to make decisions regarding their health; therefore, such patients tend to search more for information from multiple resources to make such decisions [3]. There is a paucity of research in the Gulf region on health information sources; most studies focus on the type of resource being used, with wide variation among the reported results. A study conducted in Saudi Arabia showed that $87.6\%$ of the participants relied specifically on doctors as their primary source of health information, whereas the Internet was not commonly used as a primary or secondary source [9]. However, a study targeting students in the Sultanate of Oman showed that the Internet and family members are more commonly utilized sources of health information compared to doctors and other experts [10]. These results align with those of a Kuwaiti university study that showed $92.6\%$ of university students using the Internet as a health information source [11]. For the United Arab Emirates (UAE), however, there are no published results regarding primary or secondary general sources of health information, although a study conducted by Figueiras regarding COVID-19 information resources exclusively found that only $20\%$ would consult a physician [12]. Hence, understanding the different sources of health information used, and the level of trust placed in them by the population in the UAE, is necessary for helping individuals make informed medical decisions and evaluate healthcare options.
Therefore, the aims of this study were to (a) evaluate the different information sources used by the population in the United Arab Emirates and their trustworthiness, (b) assess the impact of COVID-19 on health information sources, and (c) explore the Internet as a health information source.

## 2.1. Study Design and Target Population

A cross-sectional study was conducted between July 2021 and September 2021 to determine the sources of health information used by the population in the UAE. The eligibility criteria included (a) adults above the age of 18 years and (b) the ability to communicate in English and/or Arabic. Individuals younger than 18 years old and those who could not communicate in English or Arabic were excluded. This study and its protocols were reviewed and approved by the Research Ethics Committee of the University of Sharjah (REC-21-06-09-04S).

## 2.2. Questionnaire Development

A self-administered questionnaire was developed based on a review of the current literature on the topic [9,13,14,15,16,17]. It was developed in English and Arabic in Google Forms and was distributed online using different social media platforms. The questionnaire was initially developed in English, and translation was performed by two of the authors, who are fluent in both languages. It consisted of three sections, the first evaluating demographic data and assessing health status (presence of chronic diseases, frequency of health seeking, and subjective rating of health). It also made use of the well-established Single Item Literacy Screener (SILS). The second section investigated the different sources used before and after the COVID-19 pandemic, the frequency of usage, and the level of trust associated with each source. The third section evaluated the effect of those sources on the participants’ knowledge and health-related decision making. The questionnaire was pilot-tested several times, and the feedback provided was assessed and incorporated where appropriate.
To ensure the data had no missing values, the questions were structured as “required” in Google Forms such that the participants could not move to the next question before answering the previous one.

## 2.3. Sampling and Data Collection

Sample size calculation is an essential part of any study to ensure adequate power. In this study, it was calculated using the well-established Cochran’s sample size formula, which is widely used, as can be seen in similar studies by the authors [18]. It states that for a standard normal variate $z_{1-\alpha/2}$ (calculated from the confidence level), standard deviation $SD$, and margin of error $d$, the sample size $s$ can be calculated as follows: $s = \frac{z_{1-\alpha/2}^2 \times SD^2}{d^2}$. Given the lack of any prior studies on the topic, $SD$ takes a value of 0.5 [19]. With a confidence level of $95\%$ and a margin of error of $5\%$, the estimated sample size in this study was calculated to be 385. This was increased to 440, assuming a $20\%$ non-response bias. Given the non-probabilistic sampling technique used for recruitment, a total sample of 1000 was aimed for. The questionnaire was distributed through several online platforms such as e-mail, social media, and WhatsApp. A participant information sheet was presented before starting the questionnaire, and agreement to fill out the questionnaire indicated consent to join the study. Additionally, no identifying data were collected, to ensure participants’ anonymity.

## 2.4. Data Entry and Analysis

Data were exported from Google Forms to CSV format and processed in Python using Matplotlib-v3.3.4, pandas-v1.2.4, and statsmodels-v0.12.2. Since all questions were required, there were no missing values. For univariate analysis, categorical variables were evaluated using percentages. Age was categorized into four groups, attempting to obtain meaningful groups (below 18; above 40) while ensuring that each group had a sufficient number of members to assist with the statistical testing discussed later.
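As a concrete check of the numbers above, Cochran’s formula can be evaluated directly. This is a sketch, with z = 1.96 standing in for $z_{1-\alpha/2}$ at the $95\%$ confidence level:

```python
import math

# Cochran's formula as stated above: s = z^2 * SD^2 / d^2, with SD = 0.5,
# a 95% confidence level (z ≈ 1.96), and a 5% margin of error (d = 0.05).
def cochran_sample_size(z=1.96, sd=0.5, d=0.05):
    return math.ceil(z ** 2 * sd ** 2 / d ** 2)

base = cochran_sample_size()   # → 385, matching the estimate reported above
```

The authors then inflated this base figure to 440 to account for non-response before aiming for a larger convenience sample of 1000.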
Likert scale questions (ranked from 1 to 5), specifically the ones dealing with Internet usage frequency and health rating, were binned into three groups, taking the middle score (3) as average. Hence, any score below the middle score was considered below average, while any score above was taken to be above average. No outliers were detected. All demographic variables, health insurance status, health literacy, comorbidities, and health-oriented variables were used as predictors of Internet usage and knowledge source trustworthiness. Outcomes of interest were recoded into binary variables (frequency of Internet usage, doctor trustworthiness, social media trustworthiness, and Internet trustworthiness). This recoding was performed by combining the average and below-average groups into one and renaming it accordingly. This has the advantage of identifying factors associated with above-average trustworthiness and Internet usage frequency. Bivariate analyses were conducted to identify significant predictors using chi-squared tests. The significant predictors were then fed into a binary logistic regression model, which was evaluated using a log-likelihood ratio test. The cut-off for significance was a p-value less than 0.05.

## 3.1. Demographics

A total of 1,083 responses were collected. A total of $63.07\%$ of the participants were females, and $50.32\%$ were between 19 and 29 years old. A third were UAE nationals, and nearly half were of other Arab nationalities. Nearly $60\%$ were residents of Sharjah and the other northern emirates. A total of $39.06\%$ of the respondents were students, and of those, $75.24\%$ were students of health-related majors. As for occupation, $10.34\%$ of all respondents were in the healthcare field. Of the total sample (1,083), $72.85\%$ had health insurance, and $84.30\%$ had no long-term medical condition.
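The Likert binning and binary recoding described in Section 2.4 can be sketched as follows; the helper names are ours for illustration, not the study’s code:

```python
# Sketch of the recoding from Section 2.4: 1-5 Likert scores are binned around
# the middle score (3), then collapsed into a binary outcome where "average"
# and "below average" together form the reference group.
def bin_likert(score, middle=3):
    if score < middle:
        return "below average"
    if score > middle:
        return "above average"
    return "average"

def to_binary(label):
    """1 = above average; 0 = average or below (combined reference group)."""
    return 1 if label == "above average" else 0

recoded = [to_binary(bin_likert(s)) for s in [1, 2, 3, 4, 5]]   # → [0, 0, 0, 1, 1]
```

Collapsing to a binary outcome is what lets the logistic regression identify factors associated specifically with above-average usage or trust.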
Almost two-thirds had a normal reading ability of health literacy, which was assessed as the ease of understanding health information independently. More details regarding demographics can be found in Table 1.

## 3.2.1. Usage of Health Information Sources

When asked about their sources of health information before COVID-19, participants reported doctors as the most common source at $67.41\%$, followed closely by websites ($62.51\%$) and social media ($51.15\%$). For websites and social media, examples such as the World Health Organization, local government websites, and local electronic newspapers were used in the questionnaire to unify participants’ understanding of what is meant by websites. However, during the COVID-19 pandemic, websites and blogs became the most used source of health information, with $67.22\%$ using them. Social media also increased to $63.99\%$, while doctors dropped to $59.19\%$. Figure 1 shows the frequencies for all health information sources surveyed. The Internet was used mostly to learn about symptoms and diagnoses ($79.22\%$), as well as to gain more information about COVID-19 ($52.72\%$). Other uses of the Internet included exploring treatment options ($44.41\%$), gaining more information after a doctor’s visit ($44.32\%$), researching self-treatment methods ($37.12\%$), modifying health and lifestyle behaviors ($28.53\%$), choosing a healthcare provider ($27.89\%$), and deciding if a doctor visit is needed ($26.87\%$). The main websites used were search engines ($64.17\%$), international health agencies ($48.29\%$), and local government websites ($46.26\%$). The determinants of Internet usage were explored through bivariate and multivariate analyses.
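The bivariate screening step behind these analyses (a chi-squared test of independence for each predictor against the binary outcome) can be sketched on a 2×2 table. The counts below are invented for illustration, since the study’s data are not public:

```python
# Pearson chi-squared statistic for a 2x2 table [[a, b], [c, d]], e.g.
# rows = predictor level (married / not), columns = frequent Internet use (yes / no).
def chi2_2x2(a, b, c, d):
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# A predictor passes the screen if its statistic exceeds the 5% critical
# value for 1 degree of freedom (3.841), i.e. p < 0.05.
CRITICAL_05 = 3.841
stat = chi2_2x2(120, 80, 90, 110)   # invented counts
keep_predictor = stat > CRITICAL_05
```

Predictors passing this screen are the ones the study then fed into the binary logistic regression model.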
Health orientation ($p < 0.0005$), occupation ($p < 0.0005$), marital status ($p = 0.00083$), health literacy ($p = 0.036$), long-term medical conditions ($p = 0.042$), place of residence ($p = 0.047$), and age ($p = 0.049$) were shown to be significant predictors and were fed into a logistic regression model. All predictors except long-term medical conditions and health literacy remained significant. People older than 30 years (30–39 years: $p = 0.036$, OR = 2.092 ($95\%$ CI: 1.051–4.162); >40 years: $p = 0.021$, OR = 2.260 ($95\%$ CI: 1.131–4.513)) were more likely to use the Internet more frequently. On the other hand, married individuals ($p = 0.003$, OR = 0.464 ($95\%$ CI: 0.280–0.769)), non-healthcare workers ($p = 0.034$, OR = 0.567 ($95\%$ CI: 0.335–0.960)), students of non-health-related majors ($p = 0.035$, OR = 0.525 ($95\%$ CI: 0.289–0.957)), and unemployed individuals ($p = 0.003$, OR = 0.386 ($95\%$ CI: 0.206–0.725)) were less likely to use the Internet. Results from the binary logistic regression model can be found in Table 2.

## 3.2.2. Trustworthiness of Health Information Sources

Doctors were the most trustworthy source, with $82.73\%$ stating them to be of high trustworthiness. Interestingly, while websites and blogs were the most common health information source, only $30.93\%$ found them to be highly trustworthy. Overall, the least highly trusted health information source was social media, at $10\%$. Figure 2 shows the trustworthiness of the health information sources surveyed. For doctors, social media, and the Internet, additional bivariate and multivariate analyses were conducted to explore the factors correlated with higher levels of trustworthiness.
With regard to doctor trustworthiness, health beliefs ($p < 0.0005$), marital status ($p < 0.0005$), health orientation ($p = 0.025$), occupation ($p = 0.010$), health consciousness ($p = 0.012$), long-term medical conditions ($p = 0.025$), and age ($p = 0.027$) were significant predictors at the bivariate level. Results of the multivariate regression showed that married individuals ($p = 0.009$, OR = 0.450 ($95\%$ CI: 0.248–0.820)) were less likely to trust doctors, while students of health-related majors ($p = 0.047$, OR = 1.876 ($95\%$ CI: 1.007–3.494)) were more likely to trust doctors, with all other variables being insignificant. Results from the binary logistic regression model can be found in Supplementary Table S1. As for social media, age ($p < 0.0005$), nationality ($p < 0.0005$), sex ($p = 0.006$), occupation ($p = 0.006$), health beliefs ($p = 0.020$), and marital status ($p = 0.030$) were all found to be significant predictors of trustworthiness. However, at the multivariate level, health beliefs, marital status, and occupation were all shown to be insignificant. Hence, overall, individuals younger than 40 years of age (19–29 years: $p < 0.0005$, OR = 0.161 ($95\%$ CI: 0.085–0.305); 30–39 years: $p = 0.026$, OR = 0.333 ($95\%$ CI: 0.126–0.876)) and non-Emirati Arabs ($p = 0.002$, OR = 0.448 ($95\%$ CI: 0.267–0.749)) were all less likely to trust social media. Results from the binary logistic regression model can be found in Supplementary Table S2. Finally, the bivariate analysis showed occupation ($p < 0.0005$), sex ($p < 0.0005$), health literacy ($p = 0.003$), nationality ($p = 0.020$), and age ($p = 0.020$) to be significant in predicting Internet trustworthiness. When fed into the logistic regression model, only nationality was found to be insignificant.
Of the rest, only individuals with normal reading ability on the health literacy screener ($p = 0.018$, OR = 1.443 ($95\%$ CI: 1.066–1.952)) were more likely to trust the Internet. In contrast, individuals between 19 and 29 years of age ($p = 0.026$, OR = 0.584 ($95\%$ CI: 0.364–0.937)), females ($p = 0.023$, OR = 0.691 ($95\%$ CI: 0.503–0.950)), students of non-health-related majors ($p = 0.002$, OR = 0.385 ($95\%$ CI: 0.212–0.699)), and unemployed individuals ($p = 0.033$, OR = 0.496 ($95\%$ CI: 0.261–0.945)) were less trusting of the Internet as a health information source. Results from the binary logistic regression model can be found in Supplementary Table S3.

## 4. Discussion

This study aimed to explore the different health information sources used by the population in the United Arab Emirates and to evaluate their trust in them. Before the COVID-19 pandemic, doctors were the most common source, followed closely by websites and social media. During the COVID-19 pandemic, however, the Internet rose to first place and became the most common source of health information. Doctors overall were still regarded as the most trustworthy source, with the Internet being considered partially trustworthy by the majority of participants. Age, sex, and occupation were all statistically significant predictors of the pattern of health information seeking and the perceived trustworthiness of each source. There was a difference in the pattern of resource preference before and after the COVID-19 pandemic, which was also reported by other research conducted in the UAE during the pandemic. The researchers reported that while websites and social media platforms were the most used sources of health information, they were not the most trusted [12].
The increase in the use of the Internet as a source of information seeking during the pandemic could be explained by the decreased accessibility of in-person consultation with health workers and increased health anxiety [20]. This could also explain this study’s findings, since the UAE did restrict access to non-emergency health services during the pandemic. Although searching for more information regarding COVID-19 was a common reason behind Internet usage, it did not rank first in our findings. The most common reason was to learn about diseases’ symptoms and diagnoses. This presents a different picture compared with global studies, where the Internet was mostly used complementarily after a doctor’s consultation [21]. Despite a fair percentage of participants ($44\%$) supplementing their information from the Internet after a doctor’s visit, this was not the most common purpose of use; in fact, a sizable percentage ($37\%$) reported searching for self-treatment methods over the Internet. When it comes to the specific websites used, the most common were search engines; interestingly, other studies in the Gulf region (Saudi Arabia and Qatar) reported similar results [13,22]. However, even while being the most used health information source over the Internet, both studies showed search engines to be not particularly well trusted. While not explored in this study, trusted websites include those of personal doctors, medical universities, and federal medical organizations [16]. Overall, our results demonstrated that a vast majority of the participants ($82.70\%$) regard doctors as the most credible source, whereas only a third of them ranked the Internet as being of high trustworthiness. This also matches a previous study where doctors ranked first in trustworthiness, followed by pharmacists [9].
Furthermore, our results showed that more than half the participants ($56.23\%$) regard social media as partially trustworthy, in line with results from Saudi Arabia, where similar percentages distrusted the various social media sources [9]. Sbaffi and Rowley looked at the factors impacting the credibility of health information on social media platforms. Such factors included the authority of the author, the level of expertise in the field, and the objectivity of the posted information [6]. Finally, more than half of the participants partially trusted friends and family as a health information source, with another quarter reporting them to be of low trustworthiness, making them the second least trustworthy source on the list. Overall, trustworthiness and the determinants of Internet usage are functions of multiple sociodemographic factors. One of the variables that influences individuals’ trust in specific health resources is age; older people tend to have less trust in any resources that are not healthcare providers [23]. Moreover, we found that older people are more likely to have less trust in social media as a health information source overall. In comparison, young people tend to prioritize readily available resources, probably due to their increased information needs, which cover social, physical, cognitive, and sexual self-development processes [24]. Preference for sources also differs between males and females; not only do females prefer consulting more than one source, but they also tend to search for information more than males. Studies conducted in Kuwait and Egypt showed a significant association between sex and utilizing the World Wide Web as a health information source, where females were more likely to seek health information compared to males [11,25]. As demonstrated by Carpenter et al., one of the largest sex differences was that females tend to use medication package inserts as an information source more than males [15].
In this study, we found females to be less trusting of both the Internet and social media. Finally, level of education is another factor influencing health information behavior; the younger and more educated an individual is, the keener they are to use diverse sources when searching for health information [22,26,27].

## Limitations

Every study has limitations that may affect the generalizability of the results; hence, a careful review of this study’s limitations follows. The participants may not be representative of the UAE’s overall population due to the convenience sampling used. Moreover, no stratification was used to attempt to achieve specific percentages for the emirates, nationalities, or occupations. For example, the proportion of the specific nationalities in the sample is not consistent with the actual proportion in the general population (where locals usually account for around $10\%$ of the total population). However, care was taken during sampling to be inclusive and to attempt to target all sectors of the community, and each group ended up having sufficient members for statistical analysis. In addition, the sample consisted of a lower percentage of the older age groups, which may lead to bias. Given that older people may suffer from more long-term conditions and may need increased healthcare, this may affect the results and reveal different patterns of trustworthiness. Therefore, future studies could collect similar data from a larger sample and attempt to include older individuals. However, information access patterns of younger demographics are still relevant, given their unique healthcare challenges, as discussed above. No information was collected regarding the trustworthiness and frequencies of the specific websites used by the participants. Similarly, no information regarding socioeconomic status (or an equivalent proxy) was collected.
It is worth noting, however, that even then, the analysis above revealed several relations with the collected demographics. Finally, since this is a cross-sectional study, future prospective studies could be conducted to assess whether individuals consistently use the same sources of health information and the reasons behind it. Such studies can also evaluate other parameters of health information sources, such as accuracy and reliability. They can also attempt to address some of the limitations discussed here. ## 5. Conclusions This research aimed to explore the health information sources utilized by the population in the UAE and the trustworthiness of each. While doctors used to be the most common health information source, the pandemic shifted health information-seeking patterns toward online sources, with the use of social media and the Internet increasing significantly. However, participants still recognized doctors as the most trusted source, in contrast to social media and friends and family, which were the least trusted sources. Finally, bivariate and multivariate analyses revealed a complicated interplay between source usage, source trustworthiness, and sociodemographic factors, mostly in line with global and regional studies.
# Upregulation of TLR4-Dependent ATP Production Is Critical for Glaesserella parasuis LPS-Mediated Inflammation ## Abstract Glaesserella parasuis (G. parasuis), an important pathogenic bacterium, causes Glässer’s disease and has resulted in tremendous economic losses to the global swine industry. G. parasuis infection causes typical acute systemic inflammation. However, the molecular details of how the host modulates the acute inflammatory response induced by G. parasuis are largely unknown. In this study, we found that G. parasuis LZ and its LPS both increased the mortality of PAM cells and, at the same time, elevated ATP levels. LPS treatment significantly increased the expression of IL-1β, P2X7R, NLRP3, NF-κB, p-NF-κB, and GSDMD, leading to pyroptosis. Furthermore, the expression of these proteins was further enhanced by additional stimulation with extracellular ATP. When the production of P2X7R was reduced, the NF-κB-NLRP3-GSDMD inflammasome signaling pathway was inhibited, and cell mortality was reduced. MCC950 treatment repressed inflammasome formation and reduced mortality. Further exploration found that knockdown of TLR4 significantly reduced ATP content and cell mortality, and inhibited the expression of p-NF-κB and NLRP3. These findings suggested that upregulation of TLR4-dependent ATP production is critical for G. parasuis LPS-mediated inflammation, provided new insights into the molecular pathways underlying the inflammatory response induced by G. parasuis, and offered a fresh perspective on therapeutic strategies. ## 1. Introduction Glaesserella (Haemophilus) parasuis (G. parasuis), a gram-negative bacterial species, is the etiologic agent of Glässer’s disease in pigs, which is characterized by fibrinous polyserositis, polyarthritis and meningitis [1,2]. In addition, it can be a contributor to swine respiratory disease and is found as a commensal bacterium in the nasal cavity of healthy swine [3]. Recently, G. 
parasuis has become one of the major causes of nursery morbidity and mortality in swine herds, resulting in significant economic losses in the pig industry [4]. So far, 15 serovars of G. parasuis have been identified, but >$20\%$ of isolates remain non-typeable [5,6]. The serovar is thought to be an important virulence marker in G. parasuis [7]. G. parasuis serovars 4, 5, and 13 are the current epidemic strains in China, according to epidemiological studies, with serovar 5 considered to be highly virulent and serovar 4 moderately virulent [8,9]. Therefore, managing infection brought on by G. parasuis is essential since it is one of the most significant bacterial respiratory infections in pigs. Porcine alveolar macrophages (PAMs) are regarded as a crucial line of defense against G. parasuis infection in outbreaks of Glässer’s disease [10]. PAMs release pro-inflammatory and anti-inflammatory cytokines and chemokines to draw leucocytes to the infection site after recognizing the cell structures on the surface of the bacterium, phagocytosing, and lysing it [11,12,13]. However, the factors responsible for the systemic infection and inflammatory responses of G. parasuis have not yet been fully clarified. Thus, the discovery of novel regulatory factors of G. parasuis-induced inflammatory responses may be an alternative strategy for the prevention and control of Glässer’s disease in swine production systems. Many cells die because of sickness, aging, or damage; defects in cell death can impair development and ultimately result in a number of illnesses, such as autoimmune disorders, cancer, or infections [14]. Recently, the field of cell death has rapidly advanced, and multiple cell death pathways have been discovered, including apoptosis, necroptosis, pyroptosis, ferroptosis, and autophagy-dependent cell death. 
Studies have shown that a large number of effectors of cell death can regulate activation of the NOD-like receptor (NLR) family pyrin domain containing 3 (NLRP3) inflammasome, and NLRP3 inflammasome activation can lead to cell death [15,16]. At the moment, it is widely acknowledged that ligands for Toll-like receptors (TLRs), cytokine receptors (such as the IL-1 receptor and the TNF-α receptor), or NLRs can cause the activation of the transcription factor NF-κB and boost the production of NLRP3 and pro-IL-1β [17,18]. Lipopolysaccharide (LPS) is the most abundant component within the cell wall of Gram-negative bacteria, playing a vital role in the way bacteria interact with the environment and the host. LPS can lead to an acute inflammatory response toward pathogens [19,20]. Toll-like receptor 4 (TLR4), acting as a receptor for LPS, has a pivotal role in the regulation of immune responses to infection [21]. The binding of LPS to TLR4 leads to the activation of NF-κB, which plays a crucial role in regulating the transcription of genes related to innate immunity and inflammation responses in the lungs and in monocytes [22]. P2X receptors are trimeric, non-selective cation channels triggered by extracellular ATP. Because it plays a role in the pathways of apoptosis, inflammation, and tumor growth, the P2X7 receptor subtype is a therapeutic target [23,24]. Acute immobilization stress has been shown to release a significant quantity of extracellular ATP, which activates P2X7 receptors and, in turn, NLRP3, causing the production of inflammatory cytokines [25]. The P2X7R also activates intracellular pathways unrelated to the inflammasomes, but frequently associated with them, in order to increase inflammation. The activation of NF-κB, a transcription factor that regulates the production of various inflammatory genes such as TNF-α, COX-2, and IL-1β, is perhaps the best characterized [26,27]. In this research, we explore the role of the ATP/P2X7 receptor axis on G. 
parasuis-induced Glässer’s disease, and the contribution of the NLRP3 inflammasome to this pathological process. To further investigate the underlying causative processes of Glässer’s disease, we also explored the effects of various antagonists, agonists, and pathway inhibitors on P2X7 expression and activation. Collectively, these findings could provide a novel viewpoint on treatment options for Glässer’s disease. ## 2.1. Bacterial Strain and Cell Culture G. parasuis serovar 5 strain LZ was isolated in our lab. Bacteria were grown on Trypticase Soy Agar (TSA) and in Trypticase Soy Broth (TSB) (OXOID) at 37 °C with the addition of $0.01\%$ nicotinamide adenine dinucleotide (NAD) and $5\%$ (v/v) inactivated bovine serum. RPMI1640 medium (Solarbio, Beijing, China) containing $10\%$ fetal bovine serum (FBS) (10091148, Gibco, New Zealand) and $1\%$ pen/strep solution (Solarbio, China) was used to maintain porcine alveolar macrophage (PAM) 3D4/2 cells (ATCC: CRL-2845) at 37 °C in a $5\%$ CO2 incubator. ## 2.2. Cell Viability Assay To determine cell viability, the Cell Counting Kit-8 (CCK-8) assay (Beyotime, Shanghai, China) was used. Briefly, PAM cells were seeded in a 96-well plate and either received G. parasuis LZ/LPS treatment or not. After 24 h, 10 μL of CCK-8 solution was added to each well and incubated at 37 °C for 2 h. The absorbance at a wavelength of 450 nm was read using a microplate reader (SpectraMax® M5, Molecular Devices, San Jose, CA, USA). ## 2.3. LPS Extraction and Quantification The LPS component of G. parasuis LZ was extracted using a Lipopolysaccharide Isolation Kit (Sigma, MAK339, St. Louis, MO, USA). LPS concentrations were determined with the Pierce LAL Chromogenic Endotoxin Quantitation Kit (Thermo Fisher Scientific, New York, NY, USA) following the manufacturer’s instructions. LPS was diluted in RPMI1640 medium to a stock concentration of 1 mg/mL. ## 2.4. 
EdU (5-ethynyl-2′-deoxyuridine) Incorporation Assay The BeyoClick™ EdU Cell Proliferation Kit with Alexa Fluor 555 (Beyotime Biotechnology, Haimen, China) was used to conduct cell proliferation tests in accordance with the manufacturer’s recommendations. PAM cells were treated, then incubated with 10 μM EdU for 2 h at 37 °C. Cells were then subjected to $4\%$ paraformaldehyde fixation and $0.5\%$ Triton X-100 permeabilization steps at room temperature. After the fixatives were removed, $2\%$ BSA in PBS was used to wash the cells. PAM cells were stained with DAPI and treated in Click Additive Solution while being shielded from light. In the following step, a Leica SP8 confocal microscope was used to capture the fluorescence images of the EdU incorporation samples. ## 2.5. ATP Assays The ATP levels of infected PAM cells were detected by an Enhanced ATP Assay Kit (S0027, Beyotime Biotechnology, Shanghai, China) based on the manufacturer’s instructions. Total ATP levels of PAM cells were quantified by firefly luciferase detection using a luminometer (Tecan Infinite 200pro), and the ATP concentrations (nmol/μg) were calculated based on an ATP standard curve. ## 2.6. Enzyme-Linked Immunosorbent Assay (ELISA) The samples in the medium during cell culture were collected at 4 °C and then added to a 96-well ELISA plate. To measure the release of inflammation-related cytokines from the cells, an IL-1β Porcine ELISA Kit (ESIL1B, Invitrogen, Carlsbad, CA, USA) was used according to the instructions. The absorption value at 450 nm was read by a microplate reader (SpectraMax® M5, Molecular Devices). ## 2.7. RNA Isolation and cDNA Synthesis 24 h after cells were treated with G. parasuis LZ, total RNA was extracted using the TRIzol (Life Technologies, Grand Island, NY, USA) technique. After re-suspending the whole RNA pellets in RNase-free water, RNA was measured using 260/280 UV spectrophotometry. 
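As a worked illustration of the UV quantification step above, the sketch below applies the standard conversion of one A260 unit ≈ 40 μg/mL for RNA together with the 260/280 purity ratio; the absorbance readings and dilution factor are hypothetical, not values from this study.

```python
# Sketch of A260-based RNA quantification; all readings below are hypothetical.
# Assumes the standard conversion: 1 A260 unit corresponds to ~40 ug/mL of RNA.

def rna_concentration_ug_per_ml(a260: float, dilution_factor: float = 1.0) -> float:
    """Estimate RNA concentration from a blank-corrected A260 reading."""
    return a260 * 40.0 * dilution_factor

def purity_260_280(a260: float, a280: float) -> float:
    """260/280 ratio; values near ~2.0 indicate protein-free RNA."""
    return a260 / a280

# Hypothetical readings for a 10-fold diluted aliquot of one sample:
a260, a280 = 0.25, 0.125
concentration = rna_concentration_ug_per_ml(a260, dilution_factor=10.0)  # 100.0 ug/mL
ratio = purity_260_280(a260, a280)  # 2.0
```

A ratio appreciably below ~1.8 would suggest protein contamination, which is why the 260/280 check accompanies the concentration estimate.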
Next, potentially contaminated DNA was removed by treating the samples with DNase I (Life Technologies). Then, in a 20 μL reaction mixture, 1 μg of total RNA from each sample was reverse transcribed using a ReverTra Ace qPCR RT Kit (TOYOBO, Osaka, Japan) to produce first-strand cDNA. The cDNA was then stored frozen until use. ## 2.8. Quantitative Reverse Transcription Polymerase Chain Reaction (qRT-PCR) qRT-PCR was performed to measure mRNA expression with the following primers (IL-1β-F: TCTGCCCTGTACCCCAACTG, IL-1β-R: CCCAGGAAGACGGGATTT; β-actin-F: TCTGGCACCACACCTTCT, β-actin-R: GATCTGGGTCATCTTCTCAC). qRT-PCR was performed with SYBR® Green Real-time PCR Master Mix (TOYOBO, Osaka, Japan). The cDNA synthesized in Section 2.7 was used in this step. The cycling conditions were as follows: a denaturation stage at 95 °C for 30 min, followed by 40 cycles of conventional PCR. Melting curve analysis was used to determine the specificity of the amplified products. The $2^{-\Delta\Delta Ct}$ method was used for quantification. Gene expression values were standardized to the expression of β-actin mRNA, which was consistent across all samples. ## 2.9. Western Blot Total cellular protein lysates were produced by lysing the cells with ice-cold RIPA buffer supplemented with a protease inhibitor cocktail (Merck Millipore, Darmstadt, Germany). Following BCA protein quantification, samples were run through SDS-PAGE and then transferred to PVDF membranes. After blocking with $5\%$ skim milk, membranes were incubated with the primary antibodies overnight at 4 °C and with the secondary antibodies for an hour at room temperature. Then, the membrane was visualized with enhanced chemiluminescence and quantified by densitometry. All proteins were normalized to the level of β-actin. The primary antibodies were mouse anti-β-actin antibodies and those against NF-κB, p-NF-κB, GSDMD, NLRP3, IL-1β, caspase1, and the P2X7 receptor from Cell Signaling Technology in the United States. 
The secondary antibodies were goat anti-rabbit and goat anti-mouse antibodies (Beyotime, China). ImageJ software was used to quantify the gray values of protein bands. ## 2.10. Immunofluorescence and Imaging Analysis PAM cells were plated on a laser confocal Petri dish. Following the desired treatments, cells were fixed with $4\%$ paraformaldehyde for 10 min and permeabilized with $0.25\%$ Triton X-100 at room temperature for 15 min. Cells were blocked with $5\%$ goat serum for 50 min at room temperature before being incubated with primary NF-κB antibodies (1:200) overnight at 4 °C. The cells were stained with secondary antibodies (1:400) for 1 h after being washed with PBS. All dishes were DAPI stained to identify nuclei. All slides were then mounted with ProLong™ Gold Antifade mountant. A Leica SP8 confocal microscope was used to capture the immunofluorescence images. ## 2.11. Plasmids and Transfection Plasmids, a negative control (sense UUCUCCGAACGUGUCACGUTT, antisense ACGUGACACGUUCGGAGAATT) and TLR4-siRNA (sense CAGGAAUCCUGGUCUAUAATT, antisense UUAUAGACCAGGAUUCCUGTT), were synthesized by Sangon (China). Using Lipofectamine™ 3000 (Invitrogen, Carlsbad, CA, USA), transfections were carried out in accordance with the manufacturer’s instructions. In a nutshell, PAM cells were plated in six-well plates and transfected with 1 mg of plasmid when they were 30–$50\%$ confluent. After 24 h of incubation, cells were treated with LPS for further experiments. ## 2.12. Statistical Analysis The reported results were statistically evaluated using the paired Student’s t-test, and comparisons between more than two groups were made using ANOVA. The reported values are expressed as mean ± standard error of the mean (SEM). The graphs were plotted using GraphPad Prism version 7.0 (GraphPad Software, La Jolla, CA, USA). 
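For the qRT-PCR readout described in Section 2.8, relative expression is obtained with the $2^{-\Delta\Delta Ct}$ method, normalizing the target (IL-1β) to the reference gene (β-actin); the arithmetic can be sketched as follows, with all Ct values being hypothetical illustrations rather than data from this study.

```python
# Sketch of 2^-(ddCt) relative quantification (target vs. reference gene,
# treated vs. control condition). All Ct values used below are hypothetical.

def fold_change(ct_target_treated: float, ct_ref_treated: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    # dCt = Ct(target) - Ct(reference), computed within each condition
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    # ddCt = dCt(treated) - dCt(control); fold change = 2^(-ddCt)
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** -dd_ct

# Hypothetical Ct values: IL-1beta vs. beta-actin, LPS-treated vs. mock
fc = fold_change(ct_target_treated=22.0, ct_ref_treated=16.0,
                 ct_target_control=25.0, ct_ref_control=16.0)
print(fc)  # 8.0 -> an 8-fold upregulation relative to mock
```

Because Ct is a log2-scale quantity, a ddCt of −3 corresponds to an 8-fold increase; normalizing to β-actin within each condition cancels differences in cDNA input between samples.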
Asterisks were used to denote significant values (* $p \leq 0.05$ and ** $p \leq 0.001$), whereas ns ($p > 0.05$) was used to denote non-significant values. Each experiment included at least three replicates. ## 3.1. G. parasuis LPS Enhanced the Mortality and the ATP Level of PAM Cells We first examined the effect of G. parasuis on the viability of PAM cells. PAM cells were treated with G. parasuis LZ at MOI = 10 for 8 h. Compared with the mock group, the viability of PAM cells in the G. parasuis LZ group was lower (** $p \leq 0.01$) (Figure 1A). Likewise, the LPS of G. parasuis LZ also decreased cell viability compared with the mock group (** $p \leq 0.01$) (Figure 1B). To further investigate the effect of G. parasuis LZ and LPS on PAM proliferation, EdU staining was utilized. Results of the EdU staining showed that the red fluorescence representing proliferating PAM cells was significantly reduced by G. parasuis LZ and LPS compared with the mock group (** $p \leq 0.01$) (Figure 1C,D). Extracellular ATP causes the cell membrane to become permeable and induces changes within the cell that could lead to apoptosis [27]. We tested the level of extracellular ATP and found that G. parasuis LZ and LPS significantly enhanced ATP levels (** $p \leq 0.01$) (Figure 1E,F). These results suggested that LPS-enhanced mortality may be related to elevated extracellular ATP levels, and that LPS may play a key role in the pathogenesis of G. parasuis. ## 3.2. ATP-Induced Pyroptosis and Activated P2X7R Pathway Although most ATP is located intracellularly, it is released into the extracellular space under specific conditions, where it is a relevant signaling molecule. It activates P2X7 and increases inflammatory cytokine levels [28]. We therefore hypothesized that LPS could induce cellular inflammation by releasing ATP. To test this, we regulated the concentration of extracellular ATP in different ways and then observed the effect on IL-1β. 
The expression of IL-1β in the ATP-added group was higher than in the G. parasuis LZ-only group (** $p \leq 0.01$) (Figure 2A). Nigericin (similar to ATP) also enhanced the expression of IL-1β, while apyrase (a highly active ATP-diphosphohydrolase) reduced the enhanced IL-1β level (** $p \leq 0.01$) (Figure 2A). Similar results are shown in Figure 2B. We also tested the mRNA level of IL-1β, and the results were consistent with Figure 2B. As shown in Figure 2D, LPS accelerated the expression of P2X7R and NLRP3, and Nigericin further increased these expressions (** $p \leq 0.01$). We also tested the expression of NF-κB and p-NF-κB, and found that NF-κB was activated by LPS (** $p \leq 0.01$) and that Nigericin enhanced this expression (* $p \leq 0.05$). These results revealed that the LPS-induced release of ATP activated inflammation. Physiological roles for GSDMD in both pyroptosis and IL-1β release during inflammasome signaling have been extensively characterized in macrophages and other mononuclear leukocytes. Assembly of N-GSDMD pores in the plasma membrane markedly increases its permeability to macromolecules, metabolites, ions, and major osmolytes, resulting in the rapid collapse of cellular integrity to facilitate pyroptosis [29]. Likewise, in this study, LPS treatment prominently increased the expression of N-GSDMD (** $p \leq 0.01$) (Figure 2D), and Nigericin further increased the expression of N-GSDMD (* $p \leq 0.05$), which meant that pyroptosis was activated. All these results suggested that ATP-induced pyroptosis proceeded through the ATP/P2X7R pathway. ## 3.3. LPS-Induced Pyroptosis through Activated P2X7R Pathway To further explore the relationship between P2X7R and pyroptosis, we used 10 μM A740003 (a P2X receptor antagonist) to treat PAM cells. First, we tested the expression of P2X7R and found that the LPS-enhanced P2X7R was inhibited by A740003, indicating that A740003 worked effectively (* $p \leq 0.05$) (Figure 3A). 
The expression of NLRP3 was then observed: A740003 also reduced the NLRP3 level significantly (* $p \leq 0.05$) (Figure 3A), indicating that P2X7R was involved in LPS-induced pyroptosis. In addition, A740003 inhibited the expression of NF-κB and p-NF-κB compared with the LPS group (** $p \leq 0.01$), meaning that NF-κB may be downstream of P2X7R in this study. When treated with A740003, the level of N-GSDMD was reduced compared with the LPS-only group (* $p \leq 0.05$) (Figure 3B). The level of IL-1β showed the same result (Figure 3C). We tested the influence of A740003 on the PAM cell survival rate, and found that LPS increased the mortality of PAM cells, whereas treatment with A740003 decreased the mortality (* $p \leq 0.05$). According to the results of immunofluorescence, NF-κB p65 expression was elevated and more of the protein entered the nucleus. These results indicated that the P2X7R pathway plays a central role in the pathogenesis of G. parasuis. ## 3.4. NLRP3 Was Involved in the Formation of Inflammation To better verify the role of inflammasome formation in cell death, MCC950 (a potent and specific inhibitor of the NLRP3 inflammasome) was utilized in this study. First, we treated cells with different concentrations of MCC950 and then observed the expression of NLRP3. Compared with the LPS group, MCC950 markedly reduced the expression of NLRP3 in a concentration-dependent manner (** $p \leq 0.01$) (Figure 4A). We also detected the expression of caspase 1, which showed the same pattern (Figure 4A). Subsequently, we tested the level of GSDMD. Compared with the LPS group, MCC950 significantly reduced the expression of GSDMD (** $p \leq 0.01$) (Figure 4B). We then measured the content of IL-1β in the culture medium by ELISA, and found that MCC950 also significantly reduced the secretion of IL-1β (** $p \leq 0.01$) (Figure 4C). 
Finally, the cell survival rate was measured by CCK-8, and the data showed that MCC950 could significantly reduce the cell mortality rate that was increased by LPS (** $p \leq 0.01$). These results suggested that inflammasome formation plays a key role in G. parasuis infection. ## 3.5. LPS Induced Inflammation in a TLR4-Dependent Manner Toll-like receptor 4 (TLR4), acting as a receptor for LPS, has a pivotal role in the regulation of immune responses to infection [21]. The binding of LPS to TLR4 leads to the activation of NF-κB, which plays a crucial role in regulating the transcription of genes related to innate immunity and inflammation responses in the lungs and in monocytes [22]. To prove that TLR4 plays an important role in G. parasuis infection, we used siRNA silencing to verify it. First, we tested the silencing efficiency of the siRNA and found that it significantly reduced the mRNA level of TLR4 (** $p \leq 0.01$), meaning that this siRNA worked well (Figure 5A). We then observed the effect of TLR4 on ATP levels. Compared with the negative control group, we found that after silencing TLR4, the ATP level decreased significantly (** $p \leq 0.01$) (Figure 5B). In addition, silencing TLR4 significantly rescued the cell death caused by LPS (** $p \leq 0.01$) (Figure 5C). We then examined the influence of TLR4 on the downstream inflammatory pathway and found that the expression of p-NF-κB and NLRP3 decreased, indicating that TLR4 knockdown decreased the activation of the NLRP3 inflammasome (** $p \leq 0.01$) (Figure 5D). These data strongly suggest that LPS induced inflammation in a TLR4-dependent manner. ## 4. Discussion G. parasuis is the causative agent of Glässer’s disease, which can lead to acute septicemia in non-immune high-health-status pigs of all ages and cause instances of arthritis, fibrinous polyserositis, severe pneumonia, and meningitis in piglets worldwide [30]. In this research, we explored the role of the ATP/P2X7 receptor axis on G. 
parasuis-induced Glässer’s disease, and the contribution of the NLRP3 inflammasome to this pathological process. Bacterial lipopolysaccharides (LPS) are the major outer surface membrane components present in almost all Gram-negative bacteria and act as extremely strong stimulators of innate or natural immunity in diverse eukaryotic species ranging from insects to humans [31,32]. No matter the kind of bacteria involved or the infection location, bacterial adaptation alterations, such as modification of LPS production and structure, are a common motif in infections [33,34]. Generally speaking, these modifications allow the bacterium to evade immune detection and lead to persistent inflammation and enhanced antimicrobial resistance [35]. LPS derived from *Escherichia coli* (E. coli) is a well-characterized inducer of the inflammatory response in vivo that activates cytokine expression via the NF-κB and MAPK signaling pathways in a TLR4-dependent manner [36]. According to studies, *Pseudomonas aeruginosa* (P. aeruginosa) LPS changes appear to be a key element in this pathogen’s ability to adapt to chronic infection. Over the duration of chronic P. aeruginosa infection, decreased LPS immunostimulatory potential helps the pathogen avoid detection by the immune system and survive [37]. It has been reported that anti-LPS antibodies can protect against mortality caused by hematogenous *Haemophilus influenzae* type b meningitis infections in infant rats [38]. In this study, we found that G. parasuis LZ induced cell death and severe inflammation in PAM cells (Figure 1A and Figure 3A), and treatment with LPS derived from G. parasuis LZ produced similar phenomena, suggesting that G. parasuis LPS plays a key role in host-pathogen interactions with the innate immune system. Pyroptosis is an inflammatory form of cell death that is brought on by certain inflammasomes [39,40]. This kind of cell death causes the cleavage of gasdermin D (GSDMD) and the activation of dormant cytokines like IL-18 and IL-1β. 
Cell enlargement, lysis of the plasma membrane, fragmentation of the chromatin, and release of the pro-inflammatory substances inside the cell are all effects of pyroptosis [41]. Pyroptosis can be triggered via the canonical inflammasome pathway, a noncanonical inflammasome pathway, and a newly discovered pathway [42,43]. Caspase-11 may selectively bind to the lipid A of intracellular LPS, which causes it to oligomerize, engage its proteolytic activity, and cleave GSDMD to create a large number of pores in the cell membrane, ultimately causing membrane lysis and pyroptosis [44]. In addition, extracellular LPS stimulation of neutrophils can activate the TLR4-P38-Cx43 pathway to release ATP extracellularly in an autocrine manner [45]. The extracellular ATP can assemble NLRP3 inflammasomes and subsequently activate pro-caspase 1 through the P2X7 pathway, resulting in pyroptosis [46]. In this study, we found that G. parasuis LZ LPS induced cell death and promoted an increase in ATP content, thus activating the P2X7 pathway, promoting the production of IL-1β and the cleavage of GSDMD, and leading to pyroptosis. This is consistent with the canonical inflammasome pathway. Luo et al. have reported that G. parasuis induces an inflammatory response in PAM cells through the activation of the NLRP3 inflammasome signaling pathway [30], which is consistent with our result. G. parasuis, an opportunistic pathogen of the lower respiratory tract of pigs, is also associated with pneumonia and is involved in the porcine respiratory disease complex [47]. Secondary G. parasuis infection enhances highly pathogenic porcine reproductive and respiratory syndrome virus (HP-PRRSV) infection-mediated inflammatory responses [48]. The polarization of LPS-stimulated PAMs toward M1 PAMs greatly reduces PRRSV replication [49], mainly because LPS reduced the level of CD163 expression to inhibit PRRSV infection via the TLR4-NF-κB pathway [30]. In this study, G. 
parasuis LPS activated inflammatory responses through the TLR4-NF-κB pathway; combined with the above reference, we hypothesize that G. parasuis infection can significantly inhibit PRRSV replication through downregulation of CD163 expression via the TLR4-NF-κB pathway. However, this hypothesis needs further verification. In conclusion, G. parasuis induced PAM cell damage mainly through pro-inflammatory and pro-pyroptotic events. The NLRP3 inflammasome in PAM cells plays a crucial role in G. parasuis-induced cell death, and both TLR4- and P2X7R-dependent pathways are alternative signaling pathways required for NLRP3 inflammasome activation during the development of G. parasuis-induced Glässer’s disease. This work provides new insights into the molecular pathways underlying the inflammatory response induced by G. parasuis and a new perspective to inform the targeted treatment of G. parasuis-induced Glässer’s disease.
# Dietary Intake of Anthocyanidins and Renal Cancer Risk: A Prospective Study ## Abstract ### Simple Summary In this large prospective study based on the PLCO trial, both the categorical analysis and the continuous analysis indicated that higher dietary anthocyanidin consumption was associated with a lower risk of renal cancer. To the best of our knowledge, this is the first prospective study that aimed to explore a potential association between dietary anthocyanidin intake and renal cancer risk. ### Abstract Evidence on the association between anthocyanidin intake and renal cancer risk is limited. The aim of this study was to assess the association of anthocyanidin intake with renal cancer risk in the large prospective Prostate, Lung, Colorectal and Ovarian (PLCO) Cancer Screening Trial. The cohort for this analysis consisted of 101,156 participants. A Cox proportional hazards regression model was used to estimate the hazard ratios (HRs) and the $95\%$ confidence intervals (CIs). A restricted cubic spline model with three knots (i.e., the 10th, 50th, and 90th percentiles) was used to model a smooth curve. A total of 409 renal cancer cases were identified over a median follow-up of 12.2 years. In the categorical analysis with a fully adjusted model, a higher dietary anthocyanidin consumption was associated with a lower risk of renal cancer (HR Q4 vs. Q1: 0.68; $95\%$ CI: 0.51–0.92; p for trend < 0.010). A similar pattern was obtained when anthocyanidin intake was analyzed as a continuous variable. The HR of a one-SD increment in anthocyanidin intake for renal cancer risk was 0.88 ($95\%$ CI: 0.77–1.00, p = 0.043). The restricted cubic spline model revealed a reduced risk of renal cancer with a higher intake of anthocyanidins, and there was no statistical evidence for nonlinearity (p for nonlinearity = 0.207). In conclusion, in this large American population, a higher dietary anthocyanidin consumption was associated with a lower risk of renal cancer. 
Future cohort studies are warranted to verify our preliminary findings and to explore the underlying mechanisms in this regard. ## 1. Introduction The incidence of and costs related to renal cancer have increased during the last two decades. As the population ages, the prevalence of established risk factors such as obesity, hypertension and chronic kidney disease increases, and the expansion of routine imaging for many disorders means that the renal cancer burden will increase significantly [1,2]. The management of renal cancer has evolved rapidly in recent years, with several immunotherapy-based combination strategies approved as first-line therapies for metastatic disease. However, renal cancer remains one of the most lethal urological malignancies. According to updated data reported by the World Health Organization, there were more than 140,000 renal cancer-related deaths worldwide in 2012 [3]. With rising rates of recurrence, aside from developing a personalized therapeutic treatment plan with minimal adverse events [4], it is fundamentally important to improve cancer prevention by identifying the potential factors associated with its risk. Recent evidence has suggested that dietary flavonoid intake may be associated with a decreased risk of chronic and degenerative diseases [5]. Flavonoids are classified into 12 major subclasses based on their chemical structures, and different subclasses may have different effects on human diseases. Anthocyanins are colored water-soluble pigments belonging to the flavonoids, which provide red, blue and purple colors to fruits and vegetables. Anthocyanin pigments have been widely used as natural food colorants [6]. Recently, these colored pigments were found to have potent antioxidant properties, which confer various beneficial health effects against cardiovascular [7] and neurodegenerative diseases, as reported by scientific studies from cell culture, animal models and clinical trials [8]. 
Similarly, dietary anthocyanidin intake has been found to be associated with a lower risk of several cancers, including lung cancer [9], head and neck cancer [10], and esophageal cancer [11]. To the best of our knowledge, evidence on the association between anthocyanidin intake and renal cancer risk is limited. An early hospital-based case-control study from Italy on this topic found no significant association between anthocyanidin consumption and renal cell carcinoma [12], based on 767 RCC cases and 1534 hospital controls. Therefore, the aim of this study was to assess the association of anthocyanidin intake with renal cancer risk in the large prospective Prostate, Lung, Colorectal and Ovarian (PLCO) Cancer Screening Trial. ## 2.1. Study Design and Population The PLCO Cancer Screening Trial was a multicenter randomized controlled trial designed to assess whether screening exams could reduce mortality from prostate, lung, colorectal, and ovarian cancers; its study design and implementation have been described previously [13]. There were two arms in the PLCO trial: the intervention arm and the control arm. A total of 76,682 men and 78,215 women aged 55 to 74 years were enrolled in the PLCO study between November 1993 and September 2001 in ten screening centers across the United States of America (Washington, Pittsburgh, Honolulu, Denver, Marshfield, Minneapolis, Birmingham, Salt Lake City, Detroit, and St Louis). The principal recruitment strategy targeted individuals from the general population residing in the areas near the screening centers. Participants in the intervention arm were screened during the first 3–4 years, and participants in both arms of the study were subsequently followed for up to 10 more years to determine the potential benefits or harms of the screening exams. 
From its inception, the PLCO was designed not only as an RCT of screening for four cancers but also, more broadly, as a research enterprise consisting of the trial as a large, well-characterized cohort with all-cancer outcomes [14]. The current analysis is a secondary analysis of a primary database from the PLCO study. In total, 4918 participants were excluded because of a lack of baseline questionnaire data. Participants who did not complete a valid questionnaire or had been diagnosed with any cancer were also excluded (n = 48,237). We further excluded individuals with an implausible energy intake (i.e., lowest or highest 1%) (n = 546), with renal pelvis cancer (n = 34), and without follow-up time (n = 6). Overall, the cohort for this analysis consisted of 101,156 participants. Written informed consent was obtained from all the study participants, and the study protocol was approved by the Institutional Review Board of the NCI. The data used in this study were obtained from the PLCO website with the permission of the NIH PLCO study group (CDAS project “PLCO-1020”). ## 2.2. Data Collection Participants were asked to complete a self-administered questionnaire containing personal baseline information. From the PLCO study, we collected information regarding age, gender, body mass index (BMI), race/ethnicity, education level, smoking status, and history of hypertension. The Diet History Questionnaire (DHQ) version 1.0 (National Cancer Institute, 2007) was used to collect the dietary information, including the total daily energy intake and the daily intake of anthocyanidins. The DHQ recorded the frequency and quantity of 124 food items and supplements used over the past 12 months [15].
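As a quick sanity check, the exclusion cascade described in Section 2.1 can be replayed arithmetically; the sketch below uses only the enrollment and exclusion counts quoted in the text:

```python
# Replay the PLCO analytic-cohort derivation from the counts quoted in the text.
enrolled = 76_682 + 78_215  # men + women enrolled between 1993 and 2001

exclusions = {
    "no baseline questionnaire data": 4_918,
    "invalid questionnaire or prior cancer diagnosis": 48_237,
    "implausible energy intake (lowest/highest 1%)": 546,
    "renal pelvis cancer": 34,
    "no follow-up time": 6,
}

analytic_cohort = enrolled - sum(exclusions.values())
print(analytic_cohort)  # 101156, matching the reported cohort size
```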
The daily frequency of food consumption was then multiplied by the representative sex-specific portion size of the food item using food composition data, which were based on the United States Department of Agriculture 1994–1996 Continuing Survey of Food Intakes by Individuals (CSFII) and the University of Minnesota's Nutrition Data Systems for Research [16]. The DHQ was found to fare as well as or better than two widely used food frequency questionnaires (FFQs) when the PLCO trial was conducted [15]. Cyanidin, delphinidin, malvidin, peonidin, petunidin, and pelargonidin are the six common types of anthocyanidins [6,8]. In this study, the daily intake of these subclasses was collected through the DHQ. The amounts for processed foods were assumed to be 50% of those of the raw foods to account for losses during processing [9]. The total daily intake of anthocyanidins was the sum of all six subclasses. ## 2.3. Renal Cancer Ascertainment In this study, the outcome was the incidence of renal cancer. In the PLCO trial, confirmation of the diagnosis of renal cancer was obtained from the study update forms that were mailed to participants annually, asking about cancer diagnoses in the prior year. The follow-up time was based on reports from physicians, and the results were confirmed by periodic linkage to the state cancer registries and death certificates. In this study, a renal cancer case was defined as a malignant neoplasm of unspecified kidney, except for the renal pelvis (2022 ICD-10-CM Diagnosis Code C64.9). Follow-up started one year after completion of the DHQ and continued until the participants were diagnosed with cancer, withdrew from the trial, died from any cause, or completed the 10-year follow-up, whichever came first. ## 2.4. Statistical Analysis A Cox proportional hazards regression model was used to estimate the hazard ratios (HRs) and 95% confidence intervals (CIs).
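The intake derivation described in Section 2.2 (daily frequency × sex-specific portion size × nutrient density, with processed foods counted at 50%) can be sketched as follows. The food items, portion sizes, and anthocyanidin densities below are hypothetical illustrations, not entries from the CSFII-based composition tables:

```python
# Sketch of the DHQ-style intake derivation. All numbers here are hypothetical
# illustrations, not values from the PLCO food-composition tables.

def daily_anthocyanidin_mg(items):
    """Total anthocyanidin intake (mg/day) over reported food items.

    Each item: (daily_frequency, portion_g, mg_per_g, is_processed).
    Processed foods count at 50% to reflect losses during processing.
    """
    total = 0.0
    for freq, portion_g, mg_per_g, is_processed in items:
        amount = freq * portion_g * mg_per_g
        if is_processed:
            amount *= 0.5
        total += amount
    return total

# Example: raw berries once a day, a processed berry product twice a day.
foods = [
    (1.0, 100.0, 1.2, False),  # 120 mg from the raw food
    (2.0, 20.0, 0.5, True),    # 20 mg, halved to 10 mg for processing
]
print(daily_anthocyanidin_mg(foods))  # 130.0
```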
The models were adjusted for potential confounders including age (continuous), sex, race (white versus non-white), BMI (<25.0 kg/m² versus ≥25.0 kg/m²), education level (≤high school versus ≥some college), smoking status (ever versus current versus never), hypertension status (yes versus no), and total energy intake (continuous). The proportional hazards (PH) assumption was examined using the Schoenfeld residual test [17]. To assess the statistical significance of the potential differences across subgroups, Wald tests were performed on the interaction terms between anthocyanidin intake and the stratifying covariates. A restricted cubic spline model [18] with three knots (i.e., the 10th, 50th, and 90th percentiles) was used to evaluate the non-linearity of the associations. In a sensitivity analysis, we excluded cases diagnosed within the first two years of follow-up and then repeated the analysis. All statistical analyses were performed using the software STATA version 15 (Stata Corp, College Station, TX, USA). All tests were two-sided. ## 3.1. Study Characteristics A total of 409 renal cancer cases were identified over a median follow-up of 12.2 years. Anthocyanidin intake from the diet ranged from 0 to 237.36 mg/day (median value: 12.17 mg/day). Table 1 shows the characteristics of participants by quartiles of anthocyanidin consumption. Overall, compared to participants with a lower intake of anthocyanidins, those with a higher consumption tended to be older, and were more likely to be female, non-Hispanic white, and never smokers at baseline. They also had a lower BMI but a higher rate of hypertension (all p < 0.001). ## 3.2. Dietary Anthocyanidin Intakes and Renal Cancer Risk As shown in Table 2, in categorical analyses with a fully adjusted model, a higher dietary anthocyanidin consumption was associated with a lower risk of renal cancer (HR Q4 vs. Q1: 0.68; 95% CI: 0.51–0.92; p for trend < 0.010).
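For orientation, a reported hazard ratio is simply the exponential of a Cox log-hazard coefficient times the increment of interest; the coefficient below is back-calculated purely for illustration (ln 0.88 ≈ −0.128 per SD) and is not a value reported by the study:

```python
import math

def hazard_ratio(beta, increment=1.0):
    """HR for a given increment of a covariate in a Cox model.

    beta is the log-hazard coefficient per unit; for a one-SD increment,
    pass the SD as the increment (or fit the model on standardized input).
    """
    return math.exp(beta * increment)

# Illustrative only: beta of about -0.128 per SD corresponds to HR of about
# 0.88, the magnitude of the protective association reported per one SD.
print(round(hazard_ratio(-0.128), 2))  # 0.88
```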
A similar pattern was obtained when anthocyanidin intake was analyzed as a continuous variable. The HR for a one-SD increment in anthocyanidin intake for renal cancer risk was 0.88 (95% CI: 0.77–1.00, p = 0.043). The proportional hazards assumption was verified using Schoenfeld residuals. Table 3 shows the effects of the subclasses of anthocyanidin intake on renal cancer risk. The intake of delphinidin, peonidin, and petunidin was statistically significantly associated with at least a 30% reduction in the risk of renal cancer when comparing the highest vs. lowest quartiles (HR Q4 vs. Q1 for delphinidin: 0.59; 95% CI: 0.43–0.79; HR Q4 vs. Q1 for peonidin: 0.68; 95% CI: 0.50–0.93; HR Q4 vs. Q1 for petunidin: 0.69; 95% CI: 0.50–0.94). However, there was no significant association between the consumption of cyanidin, malvidin or pelargonidin and renal cancer risk. ## 3.3. Additional Analyses The restricted cubic spline model revealed a reduced risk of renal cancer with a higher anthocyanidin intake (Figure 1). There was no statistical evidence for nonlinearity (p for nonlinearity = 0.207). In the subgroup analyses shown in Table 4, a significant interaction was observed between anthocyanidin consumption and hypertension status (p = 0.002). Specifically, the favorable association between anthocyanidin intake and renal cancer risk was more pronounced in participants with a history of hypertension than in those without. No significant interaction was observed for the other stratification factors, including sex, BMI and smoking status (all p > 0.05). In a sensitivity analysis, there was little change in the findings after excluding individuals with a follow-up of less than two years (HR Q4 vs. Q1: 0.68; 95% CI: 0.51–0.92). ## 4. Discussion In this post hoc analysis of the PLCO trial, we found that the dietary intake of total anthocyanidins was inversely associated with renal cancer risk, and this association also held for the subclasses delphinidin, peonidin and petunidin. Importantly, the association was evident in the dose-response analysis and the sensitivity analysis. In the subgroup analyses, a significant interaction was observed between anthocyanidin consumption and hypertension status: a more favorable association between anthocyanidin intake and renal cancer risk was observed in participants with hypertension. The reason for this finding is not clear. Participants with a history of hypertension may be more likely to have a healthy lifestyle and dietary pattern, which can promote the intake of anthocyanidin-rich foods. No significant interaction was observed for other important renal cancer risk factors, including high BMI and smoking behavior. In the subgroup analysis according to the subclasses of anthocyanidins, a significant association was observed for only half of the types of anthocyanidins (delphinidin, peonidin, and petunidin), which indicates that only some subclasses of anthocyanidins may have health benefits. According to the literature reviewed, evidence has accumulated worldwide on the beneficial effects of anthocyanidins on chronic diseases [19,20], including cardiovascular disease [21], diabetes [22], nonalcoholic fatty liver disease [23], and neurological disease [24], as well as various types of cancer. Zhang et al. [9] recently reported that dietary intake of total anthocyanidins and of all six subclasses, including cyanidin, delphinidin, malvidin, peonidin, petunidin, and pelargonidin, was related to a reduced risk of lung cancer in the PLCO cohort. A large meta-analysis of observational studies suggested an inverse association between anthocyanidin consumption and the risk of esophageal cancer [11].
Higher intake of dietary anthocyanidins may also reduce the risk of colorectal cancer [25]. However, the relationship between anthocyanidin intake and renal cancer risk remains unclear, especially given the lack of evidence from prospective studies. To our knowledge, our study is the first to provide data on the association between dietary anthocyanidin intake and renal cancer risk from a large-scale prospective American cohort. We also examined the association visually in a dose-response manner and according to gender and subclasses of anthocyanins, with adjustment for potential confounders. Previously, only two small case-control studies had been performed on this topic, with no significant association observed [12,26]. The reasons for the inconsistency between our findings and those of the previous two studies are unknown. It could be due to unknown residual confounding or insufficient statistical power, in addition to the selection bias and recall bias inherent to case-control studies. Several mechanisms have been proposed to explain a potential inverse association between dietary anthocyanin intake and cancer incidence. Anthocyanins may influence the composition of the gut microbiome, which may mediate the metabolic benefits of anthocyanins [27]. In a mouse model, Khandelwal et al. [28] found that intake of the anthocyanidins pelargonidin and cyanidin reduced the genotoxic stress induced by environmental toxicants, such as diepoxybutane and urethane, and by endogenous nitrosation. Li et al. [29] suggested that anthocyanins exhibit anticarcinogenic properties by suppressing the proinflammatory, STAT3, and NF-kB signaling pathways and promoting the activity of essential detoxification enzymes. Farhan et al. [30] proposed that anthocyanidins could suppress human cancers by triggering copper-mediated, ROS-dependent selective death of cancer cells.
A recent umbrella review summarized that anthocyanins improved plasma lipids, glucose metabolism, and endothelial function [31], all of which may help in cancer prevention. As with any study of this type, our study had several limitations. Firstly, anthocyanidin consumption was assessed only once at baseline in the PLCO study, and dietary information may have changed over time. In addition, anthocyanidin consumption was assessed by a self-administered FFQ in our study, which is typically prone to response bias. Secondly, although we adjusted for a wide range of potential confounders, our results could be susceptible to residual confounding, as the present study had an observational design. Thirdly, potential collinearity between anthocyanidins and other nutrients could mediate the observed associations, and this was not determined in this study. Finally, physical activity has been inversely associated with renal cancer risk [32,33], and long-term dialysis is a potential risk factor for renal cancer [34]. However, these data were not available in the PLCO study, and thus we could not adjust for these potential confounders. ## 5. Conclusions In conclusion, in this large American population, a higher dietary consumption of total anthocyanidins, as well as of the three subclasses delphinidin, peonidin and petunidin, was associated with a lower risk of renal cancer. Future cohort studies are warranted to verify our preliminary findings and to explore the underlying mechanisms in this regard.
# Chemerin and Chemokine-like Receptor 1 Expression in Ovarian Cancer Associates with Proteins Involved in Estrogen Signaling ## Abstract Chemerin, a pleiotropic adipokine encoded by the RARRES2 gene, has been reported to affect the pathophysiology of various cancer entities. To further approach the role of this adipokine in ovarian cancer (OC), intratumoral protein levels of chemerin and its receptor chemokine-like receptor 1 (CMKLR1) were examined by immunohistochemistry, analyzing tissue microarrays with tumor samples from 208 OC patients. Since chemerin has been reported to affect the female reproductive system, associations with proteins involved in steroid hormone signaling were analyzed. Additionally, correlations with ovarian cancer markers, cancer-related proteins, and survival of OC patients were examined. A positive correlation of chemerin and CMKLR1 protein levels in OC (Spearman's rho = 0.6, p < 0.0001) was observed. Chemerin staining intensity was strongly associated with the expression of the progesterone receptor (PR) (Spearman's rho = 0.79, p < 0.0001). Both chemerin and CMKLR1 protein levels positively correlated with estrogen receptor β (ERβ) and estrogen-related receptors. Neither the chemerin nor the CMKLR1 protein level was associated with the survival of OC patients. At the mRNA level, in silico analysis revealed low RARRES2 and high CMKLR1 expression to be associated with longer overall survival. The results of our correlation analyses suggest that the previously reported interaction of chemerin and estrogen signaling is present in OC tissue. Further studies are needed to elucidate to what extent this interaction might affect OC development and progression. ## 1. Introduction Ovarian cancer (OC) is the leading cause of death by a gynecological malignancy in the developed world [1]. Due to the lack of screening methods and the aggressive behavior of the disease, the majority of cases are diagnosed at advanced stages [2].
When the most common serous type has spread rapidly throughout the peritoneal cavity, OC has a five-year survival rate of only 10%. Overall, this disease has a poor prognosis, with a five-year survival rate of approximately 50%. If diagnosed at earlier stages, when the cancer is still confined to the ovary, this survival rate can rise to about 90%, but today this occurs in only 20% of patients [2,3]. Increasing evidence suggests that ovarian cancer, like tumors of other origins, is affected by the adipokine chemerin [4,5,6]. Chemerin (RARRES2) is a well-described adipokine [7]. It was initially identified as a chemoattractant protein for immune cells that binds to chemokine-like receptor 1 (CMKLR1) expressed by these cells. In the meantime, diverse functions of chemerin have been defined, and chemerin was shown to regulate angiogenesis, adipogenesis, insulin response, and blood pressure [8,9,10,11,12,13]. Although two further chemerin receptors, CCRL2 and GPR1, have been identified, CMKLR1 is considered the most important receptor of this adipokine, since chemerin binding to CMKLR1 in particular leads to broad G-protein activation [14]. CMKLR1, located in the cell membrane, is internalized upon chemerin binding. Ligand binding initiates activation of G-protein and β-arrestin pathways, inducing cellular responses via second messenger pathways such as intracellular calcium mobilization and phosphorylation of mitogen-activated protein kinase (MAPK)1/MAPK2 (ERK1/2), tyrosine-protein kinase receptor (TYRO) 3, MAPK14/p38 MAPK and phosphoinositide 3-kinase (PI3K) [14,15]. Emerging studies have demonstrated a role in tumorigenesis for chemerin, whose expression often differs between tumor and non-tumor tissues [4,16].
In most tumor entities, chemerin/RARRES2 is down-regulated compared to normal tissue, e.g., in tumors of the breast, lung, prostate, liver, and adrenal gland, and in melanoma, and this decrease of chemerin expression has been suggested to be part of the tumor's immune escape [4,17]. Estrogens are known to affect the progression of ovarian cancer [18], although to a much lesser extent than that of breast cancer. These effects are dependent on the expression of estrogen receptors (ERs) α and β. Estrogens activate the proliferation of ovarian cancer cells via ERα, which is often overexpressed in this cancer entity [18,19]. Expression of ERβ, which is the predominant ER in the ovary [20], is often down-regulated in OC. ERβ is associated with improved overall survival (OS) [21,22], in line with in vitro data demonstrating that its activation reduces ovarian cancer cell proliferation and activates apoptosis [21,23,24,25]. Estrogen-related receptors (ERRs) α, β, and γ are related to various cancer-related genes, as well as to ERα, in ovarian cancer [26]. ERRs interact with ERα and several other nuclear receptors [27,28]. Thereby, among other effects, a vast number of genes modulating metabolic processes are regulated, and several different pathways are controlled [29]. ERRα, which has attracted the greatest attention to date, acts as a master regulator of cellular metabolism, thereby also promoting tumor growth [30]. Chemerin was shown to decrease ovarian steroidogenesis via CMKLR1 [31,32] and thus may be protective in hormone-dependent cancers. A tumor-suppressive effect of chemerin was also reported by a recent in vitro study demonstrating that chemerin reduces the growth of ovarian cancer cell spheroids by activating the release of interferon (IFN)α, leading to the induction of a broad, IRF9/ISGF3-mediated anti-tumoral transcriptome response [6].
However, a recent Chinese in vitro study reported a tumor-promoting role of chemerin in ovarian cancer cell lines in terms of proliferation via upregulation of programmed death ligand 1 (PD-L1) [5]. At the mRNA level, data on the expression of RARRES2 and CMKLR1 in ovarian cancer tissue have been extensively collected, e.g., by The Cancer Genome Atlas (TCGA) project (https://www.cancer.gov/tcga). However, studies based on protein data of both genes in OC are rare. Thus, to further approach the possible role of chemerin and CMKLR1 in this cancer entity, analyses of their protein levels in OC tissue and identification of correlated proteins are necessary. In the current study, protein levels of chemerin and CMKLR1 were assessed by immunohistochemistry of tissue microarrays (TMA) including tissues of 208 ovarian cancer patients. Furthermore, their association with patients' survival and with the expression of ovarian cancer markers, cancer-related proteins, and components of estrogen signaling pathways was tested. ## 2.1. Tissue Samples In this study, ovarian cancer samples collected in the Department of Pathology of the University of Regensburg were examined. Generally, Caucasian women with sporadic ovarian cancer and available information on grading, stage, and histological subtype from 1995 to 2013 were included. Patients' clinical data were available from tumor registry database information provided by the Tumor Center Regensburg (Bavaria, Germany). This high-quality population-based regional cancer registry was founded in 1991, and it covers a population of more than 2.2 million people in Upper Palatinate and Lower Bavaria. Information about the diagnosis, course of the disease, therapies, and long-term follow-up is documented. Patient data originate from the University Hospital Regensburg, 53 regional hospitals, and more than 1000 practicing doctors in the region.
Based on medical reports, pathology records, and follow-up records, these population-based data are routinely documented and fed into the cancer registry (Table 1). ## 2.2. Tissue Microarray and Immunohistochemistry The tissue microarray (TMA) was created using standard procedures that have been previously described [33,34]. For all patients included in this study, an experienced pathologist (FW) evaluated H&E sections of tumor tissues, and representative areas were marked. From these areas, core biopsies were taken from the corresponding paraffin blocks and transferred into the grid of a recipient block according to a predesigned array of about 60 specimens in each of the five TMA paraffin blocks. For immunohistochemistry, 4 μm sections of the TMA blocks were incubated with the indicated antibodies in the given dilutions according to the protocols mentioned (Table 2), followed by incubation with a horseradish peroxidase (HRP)-conjugated secondary antibody and a further incubation with 3,3′-diaminobenzidine (DAB) as substrate, which resulted in a brown-colored precipitate at the antigen site. An experienced clinical pathologist (FW) evaluated the immunohistochemical staining according to localization and specificity (Table 3). For the determination of the staining intensity of ERRα and ERRγ, a score from 0 (negative) to 3 (strongly positive) was used. Since staining intensities for ERRβ were generally lower, a score from 0 to 2 was used. For the steroid hormone receptors ERα, nuclear ERβ, and PR, the immunoreactivity score according to Remmele et al. was used [35]. Expression of the proliferation marker Ki-67, using antibody clone MIB-1, was assessed as the percentage of tumor cells with positive nuclear staining. Her2/neu expression was scored according to the DAKO score routinely used for breast cancer cases. EGFR was scored according to Spaulding et al. on a 4-tiered scale from 0 to 3 [36].
For p53 and polyclonal CEA, the “quick score” was used, in which results are scored by multiplying the percentage of positive cells (P) by the intensity (I) according to the formula Q = P × I; maximum = 300 [37]. CA-125 and ERβ were described as positive or negative, irrespective of staining intensity. Chemerin and CMKLR1 cellular staining intensity (non-specific nuclear staining was not considered) was scored on a 3-tiered scale from 1 (weak) to 3 (strong) (Figure 1). ## 2.3. In Silico Analyses To compare the expression of RARRES2 and CMKLR1 in the normal ovary, OC, and OC metastases at the mRNA level, the TNMplot webtool (https://tnmplot.com/analysis/) was used to analyze gene chip data from GEO datasets, including 744 OC patients, 46 samples from the normal ovary and 44 OC metastases [38]. The statistical significance of the comparison was determined using the nonparametric Kruskal–Wallis test. To test the association of RARRES2 and CMKLR1 mRNA levels with the overall survival of OC patients by means of the webtool KMplot (https://kmplot.com/analysis/index.php?p=service&cancer=ovar (accessed on 2 February 2023)), gene chip data from TCGA and 14 GEO datasets were analyzed. Both mRNA and survival data were available from 2021 OC patients. The following parameters were used for this analysis: splitting of the patients' collective into a high and a low expression group was performed by choosing the “auto select best cutoff” option; all patient subgroups and treatment groups were included, and biased arrays were excluded. For RARRES2, the Affymetrix ID 209496_at was used, and for CMKLR1, the Affymetrix ID 210659_at [39]. ## 2.4. Statistical Analysis Apart from the multivariate survival analyses, statistical analysis was performed using GraphPad Prism 5® (GraphPad Software, Inc., La Jolla, CA, USA). The non-parametric Kruskal–Wallis rank-sum test was used for testing differences in expression among three or more groups.
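The “quick score” defined above reduces to a one-line computation; a minimal sketch with simple range checks:

```python
# IHC "quick score": Q = P (percent positive cells, 0-100) x I (intensity 0-3).
def quick_score(percent_positive, intensity):
    if not 0 <= percent_positive <= 100:
        raise ValueError("P must be between 0 and 100")
    if intensity not in (0, 1, 2, 3):
        raise ValueError("I must be one of 0, 1, 2, 3")
    return percent_positive * intensity

print(quick_score(80, 2))   # 160
print(quick_score(100, 3))  # 300, the maximum
```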
For pairwise comparisons, the non-parametric Mann–Whitney U rank-sum test was used. Correlation analysis was performed using the Spearman correlation. Univariate survival analyses were performed using the Kaplan–Meier method. The chi-squared statistic of the log-rank test was used to investigate differences between survival curves. Hazard ratios were calculated using the Mantel–Haenszel method. A p-value below 0.05 was considered significant. Multivariate Cox regression survival analysis was performed using IBM® SPSS® Statistics 25 (SPSS®, IBM® Corp., Armonk, NY, USA) using the Enter method. ## 3.1. Intratumoral RARRES2 mRNA Levels in Ovarian Cancer and Metastasis Tissues Are Significantly Reduced When Compared to Normal Ovary Since a sufficient amount of normal ovarian or metastatic tissue could not be obtained, open-source gene chip expression data were used, making it possible to compare the mRNA expression of RARRES2 (coding for chemerin) and CMKLR1 in 744 OC tissues, 46 samples from the normal ovary and 44 tissue samples of OC metastases. This analysis of open-source data using the TNMplot webtool (https://tnmplot.com/analysis/) [38], accessed on 15 September 2022, revealed decreased RARRES2 mRNA levels in the OC (Dunn's test p = 0.0002) and the metastasis group (Dunn's test p = 0.0646) compared to normal ovarian tissue, interpreted as an attempt to evade the immune response. Regarding CMKLR1 mRNA levels, only the metastasis samples exhibited a reduced expression (Dunn's test p < 0.0001) of this receptor (Figure 2). ## 3.2. Protein Levels of Chemerin and CMKLR1 in Ovarian Cancer Tissue Both chemerin and CMKLR1 were shown to be widely detectable in OC tissues, as assessed at the protein level by means of immunohistochemistry of tissue microarrays (TMAs). Positive staining of chemerin was found in all cases (32.7% with weak, 40.5% with moderate, and 26.8% with strong staining).
CMKLR1 was also detected in all tumors, among them 22.2% with weak, 38.0% with moderate, and 39.9% with strong staining. There was a strong correlation between chemerin and CMKLR1 levels in all tumors (rho = 0.5959, p < 0.0001), as well as in the largest subgroup of serous OC (rho = 0.6285, p < 0.0001). No significant differences in the protein levels of either chemerin or CMKLR1 between G2 and G3 graded tumors, different FIGO stages, or patients with different nodal statuses were observed. Moreover, the invasion of lymph or blood vessels did not depend on the expression of either protein. ## 3.3. Protein Levels of Chemerin and CMKLR1 in Ovarian Cancer Tissue Subject to Levels of Ovarian Cancer Markers, Cancer-Related Proteins and Components of Estrogen Signaling Pathways Subsequently, mean protein levels of chemerin and CMKLR1 were compared between ovarian cancer subgroups with high vs. low expression of the ovarian cancer markers, cancer-related proteins, and components of estrogen signaling pathways analyzed in this study. First, the results showed that mean levels of chemerin and CMKLR1 were elevated in ovarian cancers with higher cytoplasmic ERβ expression when compared to the lower-expressing subgroup (p = 0.0143 and p = 0.0133, respectively) (Table 3). Mean protein levels of CMKLR1 were increased in ovarian cancer specimens with higher expression of the proliferation marker Ki67 (p = 0.0304). Protein levels of chemerin and CMKLR1 were elevated in the ERRα-high subgroup (p < 0.0001 for both). In ovarian cancers with higher expression of ERRβ, increased levels of chemerin and CMKLR1 (p = 0.0091 and p < 0.0001, respectively) were observed. CMKLR1 levels were found to be elevated in tumors with higher expression of ERRγ (p = 0.0031).
Finally, the mean protein expression of chemerin was elevated in ovarian cancers with higher expression of CMKLR1 (p < 0.0001), and the mean protein level of CMKLR1 was increased in ovarian cancers with higher expression of chemerin (p < 0.0001). No differences in chemerin and CMKLR1 expression levels could be observed between tumor subgroups with different levels of ERα, nuclear ERβ, PR, CEA, CA125, CA72-4, p53, Her2, or EGFR. ## 3.4. Correlation of Chemerin and CMKLR1 Protein Levels with Intratumoral Expression of Proteins Involved in Estrogen Signaling, Ovarian Cancer Markers, and Other Cancer-Related Genes Since chemerin is known to affect ovarian steroidogenesis and was reported to correlate with steroid hormone receptors in breast cancer, correlations of both proteins with the protein expression of PR, ERα, ERβ, and ERRα, β, and γ were examined first. Furthermore, intratumoral chemerin and CMKLR1 levels were tested for correlation with the ovarian cancer markers CA125 (MUC16), polyclonal CEA (CEACAM1, 3, 4, 6, 7 and 8), and CA72-4, and with the cancer-related genes EGFR, HER2, Ki-67 and p53. By means of Spearman's rank correlation analysis, a strong association of chemerin with progesterone receptor (PR) levels (Spearman's rho = 0.7952, p < 0.0001) was observed. Chemerin and CMKLR1 were found to be moderately associated with the intratumoral protein expression of ERβ, particularly in the largest, serous subgroup, which was true both for nuclear (chemerin: rho = 0.2127, p = 0.0213; CMKLR1: rho = 0.2630, p = 0.0039) and cytoplasmic (chemerin: rho = 0.2731, p = 0.0029; CMKLR1: rho = 0.27, p = 0.003) ERβ expression. Notably, a considerable positive correlation between both chemerin and CMKLR1 and the estrogen-related receptors (ERRs) α, β, and γ was observed. Chemerin positively correlated with ERRα (rho = 0.384, p < 0.0001), ERRβ (rho = 0.3343, p < 0.0001), and ERRγ (rho = 0.383, p < 0.0001).
CMKLR1 was associated with the expression of ERRα (rho = 0.5207, p < 0.0001), ERRβ (rho = 0.4239, p < 0.0001), and ERRγ (rho = 0.4198, p < 0.0001). Additionally, a weak positive association with the cancer marker CEACAM5 (rho = 0.1594, p = 0.0498) was observed. Expression of the other proteins mentioned above was not significantly associated with either chemerin or CMKLR1 (Table 4). ## 3.5. Correlation of RARRES2 and CMKLR1 mRNA Levels with Expression of Genes Involved in Sex Steroid Hormone Metabolism and Signaling Assessed by In Silico Analysis In silico analyses at the mRNA level (using gene chip data from 744 ovarian cancer patients accessed on the platform https://tnmplot.com [38] on 15 September 2022) corroborated the positive correlation between chemerin (RARRES2) and CMKLR1 that had been observed at the protein level (Spearman's rho = 0.26, p < 0.0001). With regard to genes involved in estrogen signaling, this analysis also substantiated the positive correlation of CMKLR1 with ERβ (ESR2) (rho = 0.33, p < 0.0001) and of CMKLR1 with ERRα (ESRRA) (rho = 0.33, p < 0.0001), which was further corroborated using the GEPIA2 platform [40] analyzing datasets from 426 serous OC patients (CMKLR1/ESR2 rho = 0.35 and CMKLR1/ESRRA rho = 0.31, both p < 0.0001). Using the same platform and data, a positive, albeit weaker, correlation of CMKLR1 with ERRβ (ESRRB) (rho = 0.2, p < 0.001) in serous OC, but not with ERRγ (ESRRG), was found. In contrast to the chemerin protein data from IHC, mRNA levels of the RARRES2 gene in ovarian cancer were not correlated with PGR, ESR2, ESRRA, ESRRB, ESRRG, or CEACAM5 in the analysis of both patient collectives on the mentioned platforms (p > 0.05 for all). ## 3.6. Survival Analyses: Association of Chemerin and CMKLR1 in Ovarian Cancer Tissue with Overall and Progression-Free Survival
Analyzing the protein data assessed in this study by IHC of TMAs, no significant differences were found when OC patients exhibiting different levels of intratumoral chemerin or CMKLR1 were compared with regard to OS by means of Kaplan–Meier analysis. Subsequently, the survival of patients with serous ovarian cancers was investigated. However, neither chemerin nor CMKLR1 levels influenced the OS of the patients in this cohort (Figure S1). The levels of these proteins also did not correlate with progression-free survival (PFS), either when including all ovarian cancer cases or when analyzing only serous ovarian cancers. Since a weakness of this study is the relatively low number of OC samples, it was speculated that an association of chemerin and CMKLR1 expression with survival might become visible in a larger patient collective. Thus, the online tool kmplot.com, providing microarray mRNA and OS data of 2021 OC patients from the Gene Expression Omnibus and The Cancer Genome Atlas [39], was used and accessed on 1 September 2022. This analysis revealed that high mRNA levels of RARRES2 in OC tissue were significantly associated with a shorter OS (HR = 1.32, p = 5.8 × 10⁻⁵). In contrast, high mRNA expression of CMKLR1 was associated with a longer OS (HR = 0.8, p = 0.0002) (Figure 3). ## 4. Discussion In this study, possible associations of the adipokine chemerin and its receptor CMKLR1 with other proteins involved in steroid hormone signaling were examined in OC tissues and in silico, since the role of these proteins in cancer remains largely unclear. It was found that in serous ovarian cancer, both chemerin and CMKLR1 protein positively correlated with ERβ protein expression and with levels of ERRα, β, and γ; additionally, chemerin protein expression was notably associated with that of PR. At the mRNA level, CMKLR1, but not RARRES2, correlated with ERRβ and γ.
These findings thus showed an association of chemerin/CMKLR1 with a nuclear estrogen receptor (ERβ), an important estrogen target gene (PR), and with modulators of estrogen signaling, which plays essential roles in OC. Chemerin has been shown to modulate steroidogenesis, especially secretion of progesterone, in the porcine ovary in both stimulatory and inhibitory ways [41], and it has been proposed that chemerin, via CMKLR1, plays a role in the development of polycystic ovary syndrome through inhibition of progesterone secretion [42]. Since progesterone is known to be of importance in OC development, the association between chemerin/CMKLR1 and PR was investigated. In our cohort of 208 patients, a strong correlation between chemerin staining intensity and PR protein expression was observed. PR expression in OC has been found to be associated with a more favorable prognosis [43], and further studies may confirm the role of chemerin herein. It has long been demonstrated that estrogens, their different receptors (ERs), and related receptors (ERRs) are major players in the origin and development of OC in various ways, which led us to investigate possible associations of chemerin and CMKLR1 with different ERs and ERRs, on which few data have been published to date. One study by Hoffmann et al. indicated an anti-proliferative effect of chemerin partly mediated via ERs [44]. In our study, both chemerin and CMKLR1 levels in tumor tissues positively correlated with estrogen receptor β (ERβ), which could be confirmed on the mRNA level for CMKLR1 and ESR2 by in silico analysis. According to past publications, this could indicate a protective role of chemerin and CMKLR1 similar to that of ERβ [21,22,23,24]. Concerning ERRs, both chemerin and its receptor positively correlated with estrogen-related receptor α (ERRα), particularly in serous OC tissue, an association also validated in silico on the mRNA level for CMKLR1.
This is in line with a previous study [26], in which ERRα was detected abundantly in OC tissues. Protein levels of chemerin and its receptor were also associated with ERRβ and ERRγ, with a stronger correlation present in serous OC. As these two receptors are indicative of poorer survival [26], the exact mechanisms of chemerin interaction with ERRs and other modulatory factors remain to be elucidated, since the putative pro-tumoral effects suggested by these findings contradict the association found with ERβ protein expression and ESR2 gene expression. In silico analyses comparing mRNA expression of the RARRES2 gene in normal ovary, OC, and OC metastases revealed a notable decrease of RARRES2 expression in OC and in metastatic tissue, whereas CMKLR1 mRNA levels were considerably reduced in OC metastases only. Low expression of chemerin in tumor tissue is in accordance with findings from other cancer entities and was suggested to indicate a protective role of chemerin in cancer progression. Gao et al., however, described a higher expression of chemerin protein in OC compared to normal tissues. Intratumoral chemerin protein levels were not associated with the overall survival (OS) or progression-free survival (PFS) of OC patients. In line with our data, chemerin was found to be expressed at low levels in melanoma and liver cancer, but according to the Human Protein Atlas, it was not prognostic in these cancers [45]. Analysis of open-source mRNA and survival data from 2021 OC patients moreover identified a favorable effect of high CMKLR1 and low RARRES2 mRNA levels on patients’ survival. Taken together, the association of chemerin and CMKLR1 with ovarian cancer prognosis appears to be complex, and factors such as hormonal status or comorbidities such as adiposity, dyslipidemia, or hypertension must be considered.
The fact that an association of chemerin or CMKLR1 protein levels with OC survival was not observed in our cohort, whereas a significant association was found on the mRNA level in a larger patient collective, might be explained by the different collective sizes. Furthermore, mRNA levels do not always correlate with the level of the encoded protein. During phases such as cell proliferation or differentiation, post-transcriptional mechanisms may cause deviations from this association. The sampling of tissues for RNA and protein analysis is a further source of variation [46]. Chemerin is a secreted protein and may be taken up by cancer cells. Thus, there are several explanations for why mRNA and protein analyses of chemerin in OC did not always yield concordant results. The first two arguments also apply to the other proteins analyzed in this study. For CMKLR1, it is important to note that only tumor cell-expressed protein was quantified. On the mRNA level, tumor cells as well as further cells, such as immune cells of the respective tissues, are included and contribute to variations between mRNA and protein data. Differences between protein-level assessment of chemerin via immunohistochemistry and RARRES2 gene expression on the mRNA level can be explained by the fact that chemerin is mainly produced by extratumoral tissues, e.g., adipocytes and hepatocytes [8]. Therefore, intratumoral protein levels measured by immunohistochemical staining are expectedly higher than mRNA levels when comparing normal and cancer tissues, and associations of intratumoral chemerin levels with OS and PFS are not mirrored by mRNA gene expression data. Tumors, including OC, are able to escape the intrinsic anti-tumor activity of the immune system by means of so-called immune evasion strategies [47,48] and cancer immunoediting, often attributed to the interaction of tumor cells with tumor-infiltrating lymphocytes as well as immunomodulatory factors such as PD-L1, CTLA-4, and CXCR4 [49,50].
This might be a possible explanation for the missing effect of different intratumoral chemerin levels on OS or PFS, as well as for the decrease of RARRES2 on the mRNA level in the in silico analysis of OC compared to normal ovarian tissue. In this context, it might be of interest to investigate the composition of tumor-infiltrating lymphocytes and their interaction with chemerin via CMKLR1 in further studies. Limitations of this study are the medium-sized cohort of OC patients and the lack of normal ovarian tissue in the immunohistochemical analysis, which was compensated for by the additional in silico analyses on the mRNA level. As is generally the case for adipokines, it remains to be determined how serum levels of chemerin should be taken into account, as serum chemerin levels were not available for our OC cohort.
## 5. Conclusions
Chemerin protein and its receptor CMKLR1 were demonstrated to be abundantly detectable by immunohistochemistry in ovarian cancer tissues and to positively correlate with intratumoral expression of PR, ERβ, and ERRs, corroborating an interaction with estrogen signaling pathways, as previously suggested. Analysis of publicly available gene expression data demonstrated a significant downregulation of RARRES2 mRNA expression in OC and metastatic tissue, whereas CMKLR1 expression was found to be reduced in metastases only. Tumoral chemerin and CMKLR1 protein levels were not related to OS, but lower RARRES2 and higher CMKLR1 mRNA levels were associated with longer OS. Our data encourage further studies examining the role of the interactions suggested in this study in the development and progression of ovarian cancer.
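The OS and PFS comparisons described in the survival analyses above rest on the Kaplan–Meier product-limit estimator. A minimal sketch on synthetic (time, event) data, where event=False denotes a censored observation (this is illustrative only, not the study cohort):

```python
# Kaplan–Meier product-limit estimator on synthetic data.
# Each subject is (time, event): event=True marks death/progression,
# event=False marks censoring (the subject leaves the risk set).

def kaplan_meier(subjects):
    """Return [(time, survival_probability)] at each event time."""
    subjects = sorted(subjects)               # by time
    at_risk = len(subjects)
    surv = 1.0
    curve = []
    i = 0
    while i < len(subjects):
        t = subjects[i][0]
        deaths = sum(1 for time, ev in subjects if time == t and ev)
        removed = sum(1 for time, ev in subjects if time == t)
        if deaths:
            surv *= 1 - deaths / at_risk      # product-limit step
            curve.append((t, surv))
        at_risk -= removed                    # events and censorings leave
        while i < len(subjects) and subjects[i][0] == t:
            i += 1
    return curve

# 5 subjects: events at months 2, 4, 6; censored at months 3 and 7
data = [(2, True), (3, False), (4, True), (6, True), (7, False)]
for t, s in kaplan_meier(data):
    print(t, round(s, 3))
```

Comparing curves between groups (e.g., sarcopenic vs. non-sarcopenic, or high vs. low chemerin) then requires a separate test such as the log-rank test, which dedicated packages provide alongside the estimator.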
# Prognostic Value of Sarcopenia and Metabolic Parameters of 18F-FDG-PET/CT in Patients with Advanced Gastroesophageal Cancer
## Abstract
We investigated the prognostic value of sarcopenia measurements and metabolic parameters of primary tumors derived from 18F-FDG-PET/CT among patients with primary metastatic esophageal and gastroesophageal cancer. A total of 128 patients (26 females; 102 males; mean age 63.5 ± 11.7 years; age range: 29–91 years) with advanced metastatic gastroesophageal cancer who underwent 18F-FDG-PET/CT as part of their initial staging between November 2008 and December 2019 were included. Mean and maximum standardized uptake values (SUV) and SUV normalized by lean body mass (SUL) were measured. The skeletal muscle index (SMI) was measured at the level of L3 on the CT component of the 18F-FDG-PET/CT. Sarcopenia was defined as SMI < 34.4 cm2/m2 in women and <45.4 cm2/m2 in men. A total of 60/128 patients (47%) had sarcopenia on baseline 18F-FDG-PET/CT. Mean SMI in patients with sarcopenia was 29.7 cm2/m2 in females and 37.5 cm2/m2 in males. In a univariable analysis, ECOG (p < 0.001), bone metastases (p = 0.028), SMI (p = 0.0075) and dichotomized sarcopenia score (p = 0.033) were significant prognostic factors for overall survival (OS) and progression-free survival (PFS). Age was a poor prognostic factor for OS (p = 0.017). Standard metabolic parameters were not statistically significant in the univariable analysis and thus were not evaluated further. In a multivariable analysis, ECOG (p < 0.001) and bone metastases (p = 0.019) remained significant poor prognostic factors for OS and PFS. The final model demonstrated improved OS and PFS prognostication when combining clinical parameters with imaging-derived sarcopenia measurements, but not with metabolic tumor parameters.
In summary, the combination of clinical parameters and sarcopenia status, but not standard metabolic values from 18F-FDG-PET/CT, may improve survival prognostication in patients with advanced, metastatic gastroesophageal cancer.
## 1. Introduction
Esophageal, gastroesophageal and gastric cancers are major causes of cancer-associated morbidity and death worldwide [1]. Despite the ongoing development of novel therapeutic strategies, the prognosis of these entities remains poor, with a 5-year survival rate between 5% and 46% [2]. In addition, up to 50% of all patients are diagnosed with an advanced stage of disease at the time of initial presentation, precluding curative treatment [3]. Fluorine-18-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG-PET/CT) is an established and important tool in the workup of esophageal, gastroesophageal and gastric cancers, providing significant diagnostic and prognostic value in these patients [4,5]. Additionally, the CT component allows for the assessment of skeletal muscle and sarcopenia. Skeletal muscle depletion, also known as sarcopenia, is the involuntary loss of muscle mass. It is one of the main components of cancer cachexia syndrome, which is associated with mobility disorders, loss of independence and even an increased risk of death [6]. Prior studies have emphasized the influence of nutritional state and body composition on overall survival in various tumor entities [7,8]. Significant weight loss due to dysphagia and altered eating habits is a well-documented clinical problem in patients with gastroesophageal cancers, with a prevalence of up to 79% prior to surgery [8,9,10,11]. Although evidence has shown a significant correlation between sarcopenia and major postoperative complications, the prognostic value of sarcopenia has not been definitively established in patients with advanced disease [12,13].
As a result, it is highly desirable to further investigate potential prognostic factors that could support therapeutic decision making in patients with esophageal and gastroesophageal cancers. Therefore, the aim of our study was to determine the prognostic value of sarcopenia measurements and metabolic activity parameters of primary gastroesophageal cancer in patients with advanced metastatic disease.
## 2. Materials and Methods
Between November 2008 and December 2019, 128 patients with primary metastatic esophageal or gastroesophageal cancer who underwent 18F-FDG-PET/CT as part of their initial staging were included from an institutional registry. Patients with primary metastatic disease who were missing staging 18F-FDG-PET/CT (n = 35) were excluded from the study. Demographic data of the study cohort are provided in Table 1. Different aspects of sarcopenia and PET/CT radiomics in patients with gastroesophageal cancer from the same patient population were evaluated in a different manuscript [14]. This retrospective study was approved by the institutional review board and the need to obtain informed consent from patients was waived (REB# 19-5575).
## 2.1. Imaging Acquisition
Whole-body 18F-FDG-PET/CT was acquired prior to treatment on a Siemens mCT40 (Siemens Healthineers, Erlangen, Germany). Images were obtained from the skull base to the upper thighs. Iodinated oral contrast medium was administered for bowel opacification; no intravenous contrast media were used. Patients received 300–400 MBq (4–5 MBq/kg) of 18F-fluorodeoxyglucose (FDG) after having fasted for 6 h, and PET/CT image acquisition was performed after approximately 60 min. Overall, 5–9 bed positions were obtained, depending on patient height, with an acquisition time of 2–3 min per bed position. CT parameters were 120 kVp tube voltage, 3.0 mm slice width, 2 mm collimation, 0.8 s rotation time and 8.4 mm feed/rotation.
## 2.2.
Image Analysis and Sarcopenia Measurements
The mean, max, and peak standardized uptake values (SUV) and SUV normalized by lean body mass (SUL) were collected from the primary tumor in each patient, using commercially available imaging software (Mirada XD Workstation, Mirada Medical, Ltd., Oxford, UK). SUV were obtained manually with a volume-of-interest (VOI) covering the entire tumor volume as defined by PET. Sarcopenia measurements were taken from the CT component of the 18F-FDG-PET/CT. Assessment of skeletal muscle mass was performed at the level of the third lumbar vertebra using Slice-O-Matic (TomoVision, version 5.0, Magog, QC, Canada). Hounsfield units (HU) were used to identify skeletal muscle (threshold −29 to 150 HU) (Figure 1). The skeletal muscle index (SMI) was calculated by normalizing the muscle area (cm2) by the square of the subject’s height (m2). SMI cutoff values for sarcopenia were used as follows [15]: SMI < 34.4 cm2/m2 in females and SMI < 45.4 cm2/m2 in males. Image analysis was performed by one radiologist with 5 years of experience in oncologic imaging.
## 2.3. Statistical Analysis
Summary statistics were used to describe demographics and disease characteristics. The Kaplan–Meier (KM) method was used to estimate overall survival (OS) and progression-free survival (PFS) and their 95% confidence intervals (CI). Univariable analysis (UVA) was used to identify potential prognostic factors for OS and PFS, including clinical variables, SUV parameters and anthropometric indices. Parameters with a p-value of <0.05 were included in a subsequent multivariable analysis (MVA). Model performance was quantified and visualized using the area under the time-dependent receiver operating characteristic (ROC) curve (AUC), calculated using leave-one-out cross-validation, which served as an internal validation method.
All statistical analyses were carried out in R version 4.0.2 [16] and a p-value of <0.05 was considered statistically significant.
## 3.1. Baseline Characteristics of the Study Cohort
Overall, 128 patients (26 females, 102 males; mean age 64 ± 11 years, range: 29–91 years) with advanced metastatic gastroesophageal squamous cell carcinoma (n = 44) or adenocarcinoma (n = 84) were included in this study. The majority of patients had an ECOG score of 0 or 1 (22% and 57%, respectively) and 21% had an ECOG score of 2 or above. All patients were treated with palliative intent and underwent chemotherapy, radiotherapy or a combination of both. A total of 2/128 patients underwent additional salvage esophagectomy and esophago-gastrectomy. At the time of diagnosis, 117/128 (91%) patients presented with regional lymph node metastases and concurrent distant metastatic disease to extra-regional lymph nodes, liver, bone, brain or peritoneum. In addition, 6/128 cases had distant metastases only and in 5/128 cases the N-stage was undetermined (Table 1).
## 3.2. Image Analysis and Sarcopenia Measurements
All primary tumors were associated with increased metabolic activity on staging 18F-FDG-PET/CT, with a mean SUVmax of 15.4 (range 4.1 to 54.4). Further SUV parameters are summarized in Table 2. Overall, 60/128 (47%) patients had an SMI score below the cutoff value for sarcopenia, indicating low skeletal muscle mass and poor nutritional status. The mean SMI score in patients with sarcopenia was 29.7 cm2/m2 in females and 37.5 cm2/m2 in males.
## 3.3. Analysis on Survival Prognostication
The median (95% confidence interval) OS and PFS in our cohort were 9.0 (6.9, 10.7) months and 6.0 (4.7, 7.0) months, respectively. OS and PFS showed statistically significant differences with regard to sarcopenia status.
Median OS was 9.9 (7.8, 12.4) months in non-sarcopenic patients and 6.8 (4.9, 10.1) months in patients with sarcopenia (p = 0.032). Median PFS was 7.1 (4.6, 9.2) months in non-sarcopenic patients and 5.1 (4.5, 6.8) months in patients with sarcopenia (p = 0.02). Statistical analysis did not show significant differences when comparing patients with squamous cell carcinoma and adenocarcinoma regarding OS and PFS (p = 0.67 and 0.68, respectively). Consequently, further statistical analysis was performed on the entire cohort. UVA using Cox proportional hazards revealed the following parameters as poor prognostic factors for OS and PFS: ECOG performance status (p < 0.001), bone metastases (p = 0.028) and sarcopenic status (dichotomized sarcopenia score (p = 0.033) and SMI (p = 0.0075)). Additionally, age was associated with decreased OS in the overall cohort (p = 0.017). Metabolic parameters derived from baseline 18F-FDG-PET/CT, however, were not significantly associated with decreases in OS and PFS (Table 3). On MVA, ECOG performance status (p < 0.001) and bone metastases (p = 0.01 for OS and 0.019 for PFS) remained significant poor prognostic factors for OS and PFS in the overall cohort (Table 4). To this clinical model, we added the sarcopenia status of the patient determined by the SMI score (p = 0.065 for OS and 0.03 for PFS). The combined model (clinical parameters + sarcopenia status) outperformed the model with solely clinical parameters over a clinical course of 33 months, indicating improved OS and PFS prognostication when taking into account the patients’ nutritional status.
The results were an OS AUC of 0.76, 0.71 and 0.84 for the combined model compared to 0.70, 0.67 and 0.82 for the clinical model at 6, 12 and 33 months of follow-up, respectively (Figure 2), and a PFS AUC of 0.67, 0.69 and 0.83 for the combined model compared to 0.63, 0.65 and 0.70 for the clinical model at 6, 12 and 33 months of follow-up, respectively (Figure 3).
## 4. Discussion
In our study, we assessed the prognostic value of sarcopenia—an indicator of poor nutritional state—in combination with clinical variables and metabolic parameters derived from 18F-FDG-PET/CT among patients with advanced metastatic esophageal and gastroesophageal cancers. The main finding of our study was that sarcopenia (a low SMI value) is a prognostic marker of poor OS and PFS. Furthermore, improved prognostication of OS and PFS was observed when sarcopenia status was combined with clinical variables as opposed to clinical variables only. However, standard metabolic parameters obtained from the primary tumor on 18F-FDG-PET/CT were not associated with an overall improvement in outcome prediction. Sarcopenia describes a progressive and generalized loss of skeletal muscle mass and function, which is associated with an increase in adverse outcomes, including a high risk of falls, frailty and mortality [17]. The impact of sarcopenia in cancer patients has been studied across a broad range of malignancies, and sarcopenia has been shown to be an independent poor prognostic factor among both patients deemed curative and those undergoing palliative treatment [18,19,20]. A recent study by Gu et al. [21] indicates the prognostic significance of combined pretreatment body mass index (BMI) and BMI loss in patients with esophageal cancer. However, BMI was not found to differ significantly between sarcopenic and non-sarcopenic patients, nor was it associated with overall survival.
This emphasizes the need for advanced screening measurements beyond height and weight, especially since sarcopenia in obese patients is a known phenomenon [18,22]. The impact of sarcopenia in gastroesophageal cancer has been the subject of several previous studies [13,23,24,25,26,27]. A recent study by Sato et al. [25] showed significantly worse overall survival rates in a cohort of 48 patients with locally advanced esophageal squamous cell carcinoma who underwent definitive chemoradiotherapy, with a 3-year survival rate of 36.95% vs. 63.9%. Similarly, Koch et al. [24] investigated the impact of sarcopenia as a prognostic factor for survival in a cohort of 83 patients with locally advanced non-metastatic gastric or gastroesophageal junction (GEJ) cancer who underwent curative treatment with perioperative chemotherapy and surgery. The authors reported a significantly shorter median survival in patients with sarcopenia compared to non-sarcopenic patients (35 vs. 52 months). Further, perioperative complications occurred more frequently in sarcopenic patients. This is in line with the results of our study, showing a significant decrease in OS (6.8 vs. 9.9 months, p = 0.032) and PFS (5.1 vs. 7.1 months, p = 0.02) in gastroesophageal cancer patients with primary palliative treatment intent when sarcopenia is present. Notably, sarcopenic patients in the present study showed lower median OS and PFS compared to the prior studies [24,25], which is likely related to the presence of distant metastases in our cohort. Additionally, our study proposes the combination of standard clinical parameters with imaging-derived sarcopenia measurements to enhance outcome prediction in these patients over a clinical course of 33 months for OS (AUC 0.70 vs. 0.76 at 6 months; 0.67 vs. 0.71 at 12 months; and 0.82 vs. 0.84 at 33 months) and PFS (AUC 0.63 vs. 0.67 at 6 months; 0.65 vs. 0.69 at 12 months; and 0.70 vs. 0.83 at 33 months).
Only a few studies so far have reported contrasting results [28,29,30], including a study by Grotenhuis et al. [31], who investigated 120 patients undergoing esophagectomy following neoadjuvant chemoradiotherapy for primary esophageal cancer. The results of their study indicate that the presence of sarcopenia is not associated with negative short- and long-term outcomes. Although these studies applied similar measurement techniques for the assessment of sarcopenia (using CT images at the level of the third lumbar vertebra), cutoff SMI values varied between publications. The application of different threshold values for the assessment of sarcopenia is part of an ongoing debate. Whereas some authors used self-developed software tools, recent studies have performed their measurements with frequently used commercially available software, at least partly minimizing the effect of different evaluation approaches. The cutoff values used in the present study are among the most frequently used in the literature. Further, none of these studies reported the presence of distant metastatic disease in their patient cohorts, which may indicate that sarcopenia plays an even more prominent role in outcome prognostication, particularly in advanced metastatic disease. Although dual X-ray absorptiometry (DXA), magnetic resonance imaging (MRI), computed tomography (CT) and ultrasound (US) have previously been investigated for the imaging assessment of sarcopenia, MRI and CT are considered the most suitable methods for analyzing quantitative and qualitative changes in body composition [30]. One reason might be that CT and MRI are the most frequently used cross-sectional imaging methods in cancer patients, and thus the availability of sarcopenia measurements from this standard-of-care imaging is certainly higher than with the other methods.
18F-FDG-PET/CT is an established and routinely used imaging technique for the staging of several different malignancies, including gastroesophageal cancer. 18F-FDG-PET/CT has resulted in a significant improvement in the imaging assessment and management of gastroesophageal cancer patients at initial staging, treatment planning, restaging as well as response assessment [32]. In the present study, 18F-FDG-PET/CT was routinely performed to stage patients with esophageal and gastroesophageal cancer, and the assessment of sarcopenia was performed on the CT component of this examination. Therefore, in the future, this could potentially provide a one-stop-shop imaging-derived means to predict OS and PFS as part of routine clinical management. A study by Mallet et al. [33] had a similar approach, using staging 18F-FDG-PET/CT for the assessment of sarcopenia in patients with locally advanced esophageal cancer treated with chemoradiation. Their findings are in line with the results of our study, indicating a poor prognosis in sarcopenic patients. However, the analysis in our study was performed on a larger sample than in the aforementioned study. Further, we investigated a more homogeneous set of patients by only including those with metastatic esophageal and gastroesophageal cancer, adding potential value to the literature, particularly in patients with advanced disease. Notably, it would be highly desirable to obtain clinical, nutritional as well as functional imaging data simultaneously; however, as the results of our study demonstrated, adding standard metabolic parameters to the model with clinical information and sarcopenia measurements does not improve the prediction of OS and PFS. In contrast, several prior studies demonstrated that metabolic parameters of 18F-FDG-PET/CT do improve overall survival prediction in patients with gastroesophageal cancer [34,35,36]. A systematic review by Pan et al.
[36] analyzed 39 studies to assess the prognostic value of SUV for survival in patients with esophageal cancer. It was found that pretreatment SUV measurements can serve as prognostic survival markers in this patient population. However, in the majority of the studies, the SUV threshold separating patients with high and low survival was chosen arbitrarily, based on the median SUV values. Additionally, some studies also used maximum SUV values, reflecting a possible bias. Further, several studies obtained SUV measurements at metastasis sites rather than the primary tumor, reflecting another difference from our study. Li et al. [34] showed that metabolic parameters of sequential 18F-FDG-PET/CTs predict overall survival in esophageal cancer patients treated with chemoradiation. The MVA performed in the aforementioned study, however, revealed that metabolic tumor volume was the only independent prognostic parameter from the initial staging 18F-FDG-PET/CT, whereas SUVmax was not found to be significant, which is in line with the results of our study. This may lead to the notion that, besides SUV values, additional more advanced metabolic markers, such as metabolic tumor volume or total lesion glycolysis, should be included as surrogate parameters for outcome prediction. The following study limitations must be acknowledged. Firstly, there are inherent drawbacks due to the retrospective nature of the study. Secondly, we included patients with both squamous cell carcinoma and adenocarcinoma, leading to relative inhomogeneity of the study cohort. Thirdly, sarcopenia measurements were not performed on post-treatment imaging, since 18F-FDG-PET/CT is not funded for restaging purposes in the healthcare system where our study was conducted.
## 5.
Conclusions
Our study indicates that sarcopenia derived from standard-of-care clinical 18F-FDG-PET/CT is a prognostic marker of poor outcomes in patients with advanced metastatic esophageal and gastroesophageal cancer. Combining the patients’ nutritional states with clinical variables—but not with metabolic activity parameters from 18F-FDG-PET/CT—resulted in overall improved prognostic ability regarding OS and PFS.
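The sarcopenia classification used in this study reduces to a simple calculation: SMI is the L3 muscle cross-sectional area divided by height squared, compared against the sex-specific cutoffs from [15] (<34.4 cm2/m2 for women, <45.4 cm2/m2 for men). A sketch with illustrative numbers (not patient data):

```python
# Sketch of the SMI-based sarcopenia classification described above.
# Cutoffs per [15]; the example area/height values are illustrative only.

CUTOFFS_CM2_PER_M2 = {"female": 34.4, "male": 45.4}

def skeletal_muscle_index(muscle_area_cm2: float, height_m: float) -> float:
    """SMI = muscle cross-sectional area at L3 (cm^2) / height^2 (m^2)."""
    return muscle_area_cm2 / height_m ** 2

def is_sarcopenic(muscle_area_cm2: float, height_m: float, sex: str) -> bool:
    """Below the sex-specific cutoff means sarcopenic."""
    return skeletal_muscle_index(muscle_area_cm2, height_m) < CUTOFFS_CM2_PER_M2[sex]

smi = skeletal_muscle_index(110.0, 1.70)    # 110 cm^2 at L3, 1.70 m tall
print(round(smi, 1))                        # 38.1
print(is_sarcopenic(110.0, 1.70, "male"))   # True  (38.1 < 45.4)
print(is_sarcopenic(110.0, 1.70, "female")) # False (38.1 >= 34.4)
```

The muscle area itself comes from the segmentation step (HU threshold −29 to 150 at L3); only the final normalization and thresholding are shown here.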
# Xanthine Oxidase Inhibitory Peptides from Larimichthys polyactis: Characterization and In Vitro/In Silico Evidence
## Abstract
Hyperuricemia is linked to a variety of disorders that can have serious consequences for human health. Peptides that inhibit xanthine oxidase (XO) are expected to be a safe and effective functional ingredient for the treatment or relief of hyperuricemia. The goal of this study was to discover whether papain hydrolysates of small yellow croaker (SYCHs) have potent xanthine oxidase inhibitory (XOI) activity. The results showed that compared to the XOI activity of SYCHs (IC50 = 33.40 ± 0.26 mg/mL), peptides with a molecular weight (MW) of less than 3 kDa (UF-3) obtained by ultrafiltration (UF) had stronger XOI activity (IC50 = 25.87 ± 0.16 mg/mL; p < 0.05). Two peptides were identified from UF-3 using nano-high-performance liquid chromatography–tandem mass spectrometry. These two peptides were chemically synthesized and tested for XOI activity in vitro. Trp-Asp-Asp-Met-Glu-Lys-Ile-Trp (WDDMEKIW) had the stronger XOI activity (IC50 = 3.16 ± 0.03 mM; p < 0.05). The XOI activity IC50 of the other peptide, Ala-Pro-Pro-Glu-Arg-Lys-Tyr-Ser-Val-Trp (APPERKYSVW), was 5.86 ± 0.02 mM. According to the amino acid sequence results, the peptides contained at least 50% hydrophobic amino acids, which might be responsible for reducing xanthine oxidase (XO) catalytic activity. Furthermore, the inhibition of XO by the peptides (WDDMEKIW and APPERKYSVW) may depend on their binding to the XO active site. According to molecular docking, the peptides derived from small yellow croaker proteins were able to bind to the XO active site through hydrogen bonds and hydrophobic interactions. The results of this work illuminate SYCHs as a promising functional candidate for the prevention of hyperuricemia.
## 1. Introduction
Uric acid has been identified as a recognized or prospective biomarker for various pathological conditions.
Lifestyle factors such as high fructose intake, alcohol addiction, and a high-purine diet can all contribute to high levels of uric acid [1]. Hyperuricemia develops when serum uric acid concentrations surpass solubility limits (6.8 mg/dL at physiological pH). Chronic hyperuricemia may raise the risk of gout, which can lead to gout stones, acute arthritis, and other complications [2,3]. The major pathway of uric acid regulation involves the modulation of purine metabolism via xanthine oxidase (XO, EC 1.17.3.2), a molybdenum-containing homodimeric cytoplasmic enzyme with a molecular weight (MW) of approximately 300 kDa [4,5]. It predominantly catalyzes the conversion of xanthine and hypoxanthine to uric acid in the human body. Therefore, substances that effectively inhibit XO can be used to prevent hyperuricemia, as exemplified by drugs such as allopurinol, which can provide short-term relief from the pain caused by gout [6]. However, these synthetic drugs often cause a variety of negative effects; for example, allopurinol is highly susceptible to drug cross-reactivity and may cause rashes [7,8]. As a consequence, researchers are trying to create new inhibitors from natural sources that are safe, effective, and less expensive, such as food-derived bioactive peptides with a high XO inhibitory (XOI) effect and minimal side effects. XOI peptides are generally obtained from protein hydrolysates by separation, purification, and identification; sources include dairy products [9], nuts [10], and aquatic products [11,12]. For example, the peptides YF, WPDARG, ACECD, and FPSV were discovered in hydrolysates of aquatic products and have been shown to alleviate hyperuricemia [11,13,14]. These bioactive peptides generated from dietary protein hydrolysates are typically easily absorbed and are safer than pharmaceuticals [15,16].
Furthermore, quantitative structure–activity relationships and molecular docking approaches, which are widely used in the screening and discovery of natural small molecule active compounds, have aided the discovery of new peptides. Thus, it is important to explore physiologically active peptides from aquatic materials. The small yellow croaker (Larimichthys polyactis, SYC), a Sciaenidae fish, is widely distributed as a benthic warm temperate fish in the coastal waters of China [17], and is favored by consumers because of its high nutritional value, umami taste, and tender texture [18]. However, certain characteristics of SYC limit its current utilization, such as its small size, perishability, and potent fishy smell [19]. Typically, it is processed into items such as fish cake, fish balls, and canned food [20,21,22]. Therefore, to improve the utilization and economic value of SYC, it is critical to develop higher-value-added products. In this work, we combined traditional testing methods with computer simulation techniques to acquire XOI peptides from SYC muscle. First, papain hydrolysates from SYC were graded by ultrafiltration (UF) technology. Next, the fraction with the greatest XOI effect was identified and the amino acid sequences of two peptides were obtained. The contribution of the synthesized peptides to the XOI activities of SYC was calculated. Furthermore, molecular docking analysis was used to model the interactions between these peptides and the XO active site and, thus, shed light on the XO inhibition mechanisms of SYC peptides. The XOI effects of the XOI peptides from SYC in vitro were elucidated.

## 2.1. Materials and Chemicals

Frozen SYC 9 ± 1 cm in length was obtained from the Zhejiang Xianghai Food Co., Ltd. in Wenzhou, China. Papain (100,000 U/g), Alcalase (100,000 U/g), Neutrase (50,000 U/g), and xanthine oxidase (X1875-5UN, derived from bovine milk) were purchased from Solarbio Co., Ltd. (Beijing, China).
We purchased 0.2 M potassium phosphate buffer (pH = 7.4) from Aladdin Biochemical Technology Co., Ltd. (Shanghai, China). Xanthine (≥$98\%$) and allopurinol (chromatographically pure) were purchased from Sigma Aldrich Co., Ltd. (St. Louis, MO, USA). Sodium hydroxide, anhydrous ethanol, and boric acid were of analytical grade and purchased from Sinopharm Chemical Reagent Co., Ltd. (Shanghai, China).

## 2.2. Pretreatment of Raw Materials

The SYCs were thawed overnight at 4 °C and then manually filleted. Next, the filets were boiled to inactivate endogenous enzymes, freeze-dried, and ground into a fine powder (sieved through an 80-mesh sieve). The prepared powder was vacuum sealed and stored at −80 °C before subsequent experiments.

## 2.3. Determination of Raw Material Protein

Protein was determined using the Kjeldahl method (Kjeltec 8400 Analyzer Unit, Foss Analytical AB, Hoganas, Sweden) according to the Chinese Standard for Food Safety Determination of Protein in Food (GB 5009.5-2016). Briefly, approximately 500 mg of lyophilized sample was digested with the digestion mixture and 12 mL of concentrated sulfuric acid at 420 °C for 80 min, then cooled and subjected to distillation with 50 mL of $40\%$ NaOH and auto-titration using 0.1005 M HCl.

## 2.4. Preparation of Papain Hydrolysates from SYC

The SYC peptides were prepared in accordance with the research of Hu et al. [14] with appropriate modifications. The critical hydrolysis parameters for the preparation of papain SYCH were optimized according to our previous unpublished study. The substrate (1:20 w/v, protein weight basis) was hydrated at 50 °C for 15 min with gentle stirring, adjusted to pH 6.8, and hydrolyzed with papain at 3000 U/g on a protein basis for 6 h at 50 °C. The mixture was then heated at 95 °C for 10 min to inactivate the enzyme and centrifuged at 3950× g for 20 min at 4 °C. The supernatant was collected, concentrated, and freeze-dried to obtain SYCHs.
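The enzyme dosing in the protocol above (3000 U of papain per gram of protein, with a 100,000 U/g papain preparation as listed in Section 2.1) reduces to simple unit arithmetic. A minimal sketch — the function name and the 10 g protein example are illustrative, not from the paper:

```python
def papain_dose(protein_g, dose_u_per_g=3000, powder_activity_u_per_g=100_000):
    """Return (enzyme units required, grams of papain powder to weigh out)."""
    units = protein_g * dose_u_per_g            # U needed for this protein mass
    powder_g = units / powder_activity_u_per_g  # convert required U to g of powder
    return units, powder_g

# e.g., 10 g of SYC protein (hydrated 1:20 w/v, i.e., in 200 mL of buffer)
units, powder_g = papain_dose(10.0)  # 30,000 U -> 0.3 g of papain powder
```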
SYCHs were stored at −80 °C before subsequent experiments.

## 2.5. Preparation of Peptide Fractions of SYCHs

The enzymatic solution was fractionated through ultrafiltration centrifuge tubes with MW cut-offs of 10 kDa, 3 kDa, and 1 kDa (Pall, New York, NY, USA). The fractions corresponding to three MW distributions, i.e., >10 kDa (UF-1), 3–10 kDa (UF-2), and <3 kDa (UF-3), were concentrated and freeze-dried to obtain peptide fractions, which were stored at −80 °C before subsequent experiments.

## 2.6. Determination of Amino Acid Composition of SYC and SYCHs

Amino acid composition was determined using the method reported by Hou et al. [23] with some modifications, using an Agilent 1100 high-performance liquid chromatography (HPLC) instrument (Wilmington, DE, USA) coupled with a VWD detector (Agilent Technologies, Inc., Wilmington, DE, USA) and an Agilent Zorbax Eclipse AAA column (4.6 × 150 mm, 3.5 μm). Seventeen acid-hydrolyzable AAs were determined after hydrolysis of 100 mg of SYC or SYCHs with 6 M HCl for 22 h, while Trp was determined for 100 mg of SYC after alkaline hydrolysis with 5 M NaOH for 20 h. After passing through a 0.22 μm filter, 10 μL of sample was loaded onto the column and eluted at a flow rate of 1.0 mL/min. The column temperature was 40 °C; UV detection was at 338 nm (0–19 min) and 266 nm (19.01–25 min); mobile phase A was 40 mM sodium dihydrogen phosphate (pH 7.8), and mobile phase B was acetonitrile:methanol:water (45:45:10). All of the AAs were detected at 338 nm, except Pro, which was detected at 266 nm. The AAs were identified and quantified by comparing retention times and peak areas with those of authentic AA standards.

## 2.7. Determination of XOI Activity IC50 In Vitro

The XOI activity levels of SYCHs were determined and calculated with the methods reported by Liu and Wei [24,25] with slight modifications. Xanthine was dissolved in 0.2 M potassium phosphate buffer (pH = 7.4) to a concentration of 0.48 mM.
In addition, samples (SYCH, UF-1, UF-2, and UF-3) were also dissolved in 0.2 M potassium phosphate buffer (pH = 7.4). Next, 50 μL of sample solution and 50 μL of XO solution (0.07 U/mL) were mixed and incubated at 37 °C for 5 min, then 150 μL of xanthine solution was added to the mixture to continue the reaction. The absorbance of the formed uric acid in the samples was monitored at 290 nm with a multifunctional microplate reader (Tecan Co., Ltd., Männedorf, Switzerland). The results were recorded for 10 min. The assay was performed in triplicate. The formula for the calculation of XOI activity is as follows:

$$\text{XO inhibition}\ (\%) = \frac{(dA/dt)_{\text{blank}} - (dA/dt)_{\text{sample}}}{(dA/dt)_{\text{blank}}} \times 100\%$$

where (dA/dt)blank and (dA/dt)sample are the reaction rates without and with the test sample inhibitor, respectively. IC50 values were calculated from the mean values of the data. The XOI activity IC50 (the concentration of active compound required to observe $50\%$ XO inhibition) was determined by plotting the percentage inhibition as a function of the concentration of the test compound.

## 2.8. Determination of MW Distributions

The MW distributions of SYCH and UF-3, which showed the lowest XOI activity IC50 (detailed in Section 3.3), were determined as described by Bao et al. [26] with slight modifications. Gel permeation chromatography (Waters 1515, Waters Co., Milford, MA, USA) with a 2414 differential refractive index detector and an Ultrahydrogel gel permeation chromatography column (7.8 × 300 mm, Waters Co., Milford, MA, USA) was used. The measurement conditions were as follows: sample (SYCH and UF-3) concentration, 5 mg/mL; mobile phase, 0.1 M sodium nitrate solution; flow rate, 1 mL/min; oven temperature, 40 °C; detector temperature, 40 °C; and standard, polyethylene glycol.

## 2.9. Identification of the AA Sequence and Molecular Mass of SYCH

UF-3 showed the strongest XOI activity (detailed in Section 3.3).
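The inhibition formula and the graphical IC50 read-off of Section 2.7 can be sketched in code. The slope values and concentration series below are illustrative, and linear interpolation between bracketing points stands in for the plotting step:

```python
def xo_inhibition(rate_blank, rate_sample):
    """Percent XO inhibition from uric acid formation rates (dA/dt at 290 nm)."""
    return (rate_blank - rate_sample) / rate_blank * 100.0

def ic50(concs, inhibitions):
    """Concentration giving 50% inhibition, by linear interpolation between
    the two tested concentrations that bracket 50%."""
    points = sorted(zip(concs, inhibitions))
    for (c1, i1), (c2, i2) in zip(points, points[1:]):
        if i1 <= 50.0 <= i2:
            return c1 + (50.0 - i1) * (c2 - c1) / (i2 - i1)
    raise ValueError("50% inhibition not bracketed by tested concentrations")

xo_inhibition(0.020, 0.012)               # 40.0 (% inhibition)
ic50([10, 20, 30, 40], [18, 35, 58, 74])  # ~26.5 (mg/mL)
```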
Thus, the AA sequence and molecular mass of UF-3 were identified by nano-HPLC-MS/MS equipped with a Q Exactive Plus mass spectrometer (Thermo Fisher Scientific, Waltham, MA, USA). The samples were injected into a chromatographic analytical column (C18, 75 µm × 25 cm, 2 μm, 100 Å, Thermo Fisher Scientific) at a flow rate of 300 nL/min. The elution conditions were as follows: mobile phase A ($0.1\%$ formic acid in water); mobile phase B ($0.1\%$ formic acid in acetonitrile); and a column temperature of 40 °C. The liquid phase separation gradient was as follows: from $6\%$ to $25\%$ mobile phase B over 42 min, followed by an increase to $45\%$ mobile phase B over 11 min and an increase to $80\%$ mobile phase B over 0.5 min, then a hold at $80\%$ mobile phase B for 6.5 min at a sustained flow rate of 300 nL/min. Peptides were acquired in data-dependent acquisition (DDA) mode, with each scan cycle containing one full MS scan (R = 60 K, AGC (automatic gain control) = 3 × 10⁶, max IT = 20 ms, scan range = 350–1800 mass/charge) and 25 subsequent MS/MS scans (R = 15 K, AGC = 2 × 10⁵, max IT = 50 ms). The mass spectral data were searched with MaxQuant (v1.6.6) software.

## 2.10. Physicochemical Property Prediction of Active Peptides

Bioinformatics methodologies depend on data maintained in a variety of databases. We conducted computational investigations using database-based search tools; all programs were executed on 30 August 2022. We predicted the hemolytic properties of peptides using an online prediction website (http://codes.bio/hemopred/, accessed on 30 August 2022). We employed the toxicity prediction tool ToxinPred (https://webs.iiitd.edu.in/raghava/toxinpred/index.html, accessed on 30 August 2022) to predict the potential toxicity of XOI peptides [27]. Meanwhile, we predicted the isoelectric point (pI) of the peptides using ProtParam (http://web.expasy.org/protparam/, accessed on 30 August 2022) [12].
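Alongside these web tools, two quick sequence-level checks are easy to script: the monoisotopic peptide mass (from standard residue masses; the dictionary below covers only the residues occurring in the two identified sequences) and the hydrophobic-residue fraction (using one common hydrophobic set, AVILMFWP — an assumption, since hydrophobicity scales vary):

```python
# Monoisotopic residue masses (Da); peptide mass = sum of residues + one water.
RESIDUE = {"A": 71.03711, "D": 115.02694, "E": 129.04259, "I": 113.08406,
           "K": 128.09496, "M": 131.04049, "P": 97.05276, "R": 156.10111,
           "S": 87.03203, "V": 99.06841, "W": 186.07931, "Y": 163.06333}
WATER = 18.01056
HYDROPHOBIC = set("AVILMFWP")  # one common assignment; scales differ

def peptide_mass(seq):
    """Monoisotopic mass of a linear, unmodified peptide."""
    return sum(RESIDUE[aa] for aa in seq) + WATER

def hydrophobic_fraction(seq):
    """Fraction of residues in the assumed hydrophobic set."""
    return sum(aa in HYDROPHOBIC for aa in seq) / len(seq)

peptide_mass("WDDMEKIW")            # ~1121.49 Da
peptide_mass("APPERKYSVW")          # ~1231.63 Da
hydrophobic_fraction("WDDMEKIW")    # 0.5
hydrophobic_fraction("APPERKYSVW")  # 0.5
```

Both masses fall within the 760–1250 Da range reported in Table 3, and both sequences are at least $50\%$ hydrophobic by this count, consistent with the abstract.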
We used Innovagen (http://www.innovagen.com/proteomics-tools, accessed on 30 August 2022) to predict the water solubility of the screened potential bioactive peptides [28]. Additionally, we predicted the potential biological activity of all peptides using PeptideRanker (http://distilldeep.ucd.ie/PeptideRanker/, accessed on 30 August 2022), with scores between 0 and 1 [12]. The closer the calculated value was to 1, the higher the predicted activity of the fragment.

## 2.11. Peptide Synthesis

The two peptides (purity > $95\%$) Trp-Asp-Asp-Met-Glu-Lys-Ile-Trp (WDDMEKIW, WW8) and Ala-Pro-Pro-Glu-Arg-Lys-Tyr-Ser-Val-Trp (APPERKYSVW, AW10) from the enzymatic hydrolysates of papain SYCHs identified by nano-HPLC-MS/MS (detailed in Section 2.9 and Section 3.4) were chemically synthesized at Sangon Biotech Co., Ltd. (Shanghai, China).

## 2.12. Molecular Docking and Interaction Visual Analysis

We used the docking program AutoDock Vina to simulate molecular modeling studies in order to further understand the probable binding mechanism of peptides with XO [10]. The X-ray crystal structure of XO from bovine milk with quercetin (PDB: 3NVY) was downloaded from the RCSB Protein Data Bank (http://www.rcsb.org/pdb) (accessed on 10 September 2022) [28]. The water molecules and all small molecules in XO were removed via AutoDock Tools (v1.5.6). The 3D structures of the inhibitor molecules were built and optimized by energy minimization in ChemBio3D Ultra 14.0 [11]. The peptides and ligand inhibitors were then docked with the XO crystal structure, giving a Vina score, which is the predicted affinity of the molecule for the PDB structure, calculated in kcal/mol. A more negative score indicates that a ligand is more likely to dock with the enzyme and achieve more favorable interactions [10].
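The selection rule just described — keep the most favorable (most negative) Vina affinity per ligand — can be sketched as follows; the pose list is illustrative, with the top scores set to the affinities later reported in Table 4:

```python
def best_poses(poses):
    """Keep the most negative (most favorable) Vina affinity per ligand."""
    best = {}
    for ligand, kcal_per_mol in poses:
        if ligand not in best or kcal_per_mol < best[ligand]:
            best[ligand] = kcal_per_mol
    return best

# Illustrative (ligand, affinity in kcal/mol) pairs from several docking runs.
poses = [("WW8", -7.3), ("WW8", -6.8), ("AW10", -7.9), ("AW10", -7.1)]
best_poses(poses)  # {'WW8': -7.3, 'AW10': -7.9}
```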
The highest scoring docked model of a ligand was chosen herein to represent its most favorable binding mode predicted by AutoDock Vina [10]. We carried out functional visualization of the peptide–3NVY docking results using PyMOL 2.3.0 [29] to analyze their interaction patterns with binding site residues.

## 2.13. Statistical Analysis

All experimental data were analyzed using SPSS 25.0 (SPSS, Inc., Chicago, IL, USA) and Origin 2021 (OriginLab, Northampton, MA, USA) software. Data are presented as the mean ± standard deviation (SD). One-way analysis of variance (ANOVA) with least significant difference (LSD) procedures was used to determine the significance of the main effects, and $p \leq 0.05$ was considered statistically significant.

## 3.1. The Potential of SYC to Decrease Uric Acid Levels

The protein content of SYC was determined to be $88.96\%$ ± $1.40\%$. Table 1 summarizes the composition of the 18 AAs of SYC. SYC is rich in a variety of AAs, with a total amino acid (TAA) content of 824.45 ± 10.50 mg/g, including 327.39 ± 5.41 mg/g of essential amino acids (EAAs) for humans, which accounted for $41.09\%$ of the total. The major AAs of SYC protein were Glu ($21.08\%$), Asp ($9.41\%$), Lys ($9.07\%$), and Leu ($8.11\%$). Hydrophobic amino acids (HAAs) (Met, Leu, and Ala), aromatic amino acids (AAAs) (Trp, Phe, and Tyr), and basic amino acids (BAAs) (Lys, His, and Arg) play essential roles in the uric-acid-lowering process of peptides [23,30,31]. SYC was rich in HAAs ($34.22\%$), AAAs ($9.39\%$), and BAAs ($17.38\%$), indicating that SYC might be a source of uric-acid-lowering peptides. A hydrophobic pocket formed by AA residues near the XO active core acts as a critical structural domain that is accessible to peptides with more HAAs [23]. These AAs can bind to XO via hydrophobic interactions, for example, altering its spatial structure and thereby limiting its activity.
Furthermore, it has previously been claimed that proteins with charged AAs and AAAs, particularly Glu, constitute a valuable source of active XOI hydrolysis products [30]. Given that XO generates reactive oxygen species (ROS) by utilizing molecular oxygen as an electron acceptor, the hyperuricemia-treating medicine allopurinol also has antioxidant activity [32]. Because of the significance of phenolic and indole groups as hydrogen donors, AAAs exhibit significant antioxidant action. Furthermore, AAs with charged residues interact with metal ions and restrict oxidative activity [33]. The physiological action of the peptides benefits from a decrease in ROS, which may be related to the uric-acid-lowering effect [30]. These results support the evidence indicating that SYC is a potential source of uric-acid-lowering peptides.

## 3.2. The Optimal Conditions for the Preparation of SYCH

Targeted hydrolysis of endogenous proteins employing proteases is a typical approach for generating peptides with specified bioactivities [34]. As indicated in Figure 1A, papain was more efficient than Neutrase and Alcalase in hydrolyzing muscle proteins, and papain hydrolysates had the lowest IC50 value (IC50 = 33.40 ± 0.26 mg/mL) for XOI activity, owing to variations in the protein sequence of the substrate and the accessibility of the substrate to the enzyme's active site. Subsequently, a one-way experiment was conducted on the enzymatic hydrolysis time of papain based on XOI activity, and the hydrolysates with the best XOI activity were obtained at 6 h ($p \leq 0.05$), as shown in Figure 1B. This may be because the SYC muscle was hydrolyzed more completely and more short peptides with high XOI activity were formed as the hydrolysis time increased. In general, papain degrades proteins more thoroughly than other endoproteases. Additionally, papain has been identified as one of the most appropriate enzymes, with a low cost per unit activity [35].
Moreover, some research has used papain to create XOI peptides. For example, it was demonstrated that protein hydrolysates derived from bonito prepared with papain exhibited XOI activity [36]. The biological activity of peptides in fish hydrolysates primarily depends on their structural properties, such as AA content, sequence, and hydrophobicity [37]. Table 2 shows the AA composition of SYCH, which has a total acid-hydrolyzable AA content of 782.73 ± 16.20 mg/g, an EAA content of 288.50 ± 8.57 mg/g ($36.86\%$), and an HAA content of 248.92 ± 6.66 mg/g ($31.81\%$). Studies have confirmed that HAAs facilitate the interaction with hydrophobic targets (e.g., cell membranes), thereby enhancing bioavailability [13]. Additionally, SYCH is high in AAAs and BAAs, which may play a key role in the peptide's functional capabilities [30,36]. AAAs have benzene rings in their molecules. An AAA at one end of a peptide segment may be more beneficial for binding of the peptide to the enzyme's active domain, since the presence of a benzene ring structure in these peptide segments is thought to confer a substantial XO inhibition rate [11]. These findings imply that SYCHs may have bioavailability and anti-hyperuricemia activity, conducive to the next step of obtaining active peptides.

## 3.3. MW Distribution of XOI Peptides

Low-molecular-weight peptides have been shown to have improved bioactivity and a better capacity to penetrate the gastrointestinal membrane [38]. Bioactive peptides are commonly refined using UF. Figure 2A shows that the SYCHs mostly consist of peptides of different MWs, with the MW distribution centered below 3 kDa. The MW distributions of SYCHs and UF-3 are depicted in Figure S1, and the corresponding relative peak tables are given in Tables S1 and S2, respectively. He et al. [4] suggested that low-molecular-weight peptides may be important contributors to the significant XOI activity of XOI peptides.
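The percentages quoted from Table 2 follow directly from the contents; a quick arithmetic check using the values in the text (small rounding differences are expected):

```python
total_aa = 782.73  # mg/g, total acid-hydrolyzable AA content of SYCH
eaa = 288.50       # mg/g, essential AAs
haa = 248.92       # mg/g, hydrophobic AAs

eaa_pct = eaa / total_aa * 100  # ~36.86 %
haa_pct = haa / total_aa * 100  # ~31.80 %
```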
The peptides in SYCHs were fractionated, and the obtained fractions were individually tested for their XOI activities. Figure 2B displays the XOI activity of the SYCH and the three fractions after UF. Comparing these four fractions revealed that UF-3 (IC50 = 25.87 ± 0.16 mg/mL) had the strongest ability to inhibit XO. UF-3 also had a low molecular weight (Figure 2A), which is consistent with the trend in XOI activity. He et al. [4] collected and examined eight fractions with different molecular weight distributions from the lyophilized ethanol-soluble fraction powder of tuna protein hydrolysates and found that the fractions of small peptides (<1 kDa) had remarkable XOI activity compared to the original hydrolysates and other fractions. Other studies found that peptides from skipjack tuna below 3 kDa had a greater inhibitory effect on XO than other fractions [34]. Similar to previous findings, our investigation found that the fraction below 3 kDa inhibited XO more than the SYCH and the other fractions, indicating that this fraction has structural properties recognized by XO and thus functions as a substrate for XO. Therefore, we selected UF-3 for the next identification stage. The AA compositions of SYCH and UF-3 (showing the strongest XOI activity) were evaluated by the acid hydrolysis method to confirm whether there is a potential link between AA composition and uric acid reduction. The AA makeup shifted with the change in MW; Glu, Asp, and Lys were the most prevalent AAs in both SYCH and UF-3, as indicated in Table 2. Following UF, the proportions of BAA, AAA, and HAA increased by $2.64\%$, $5.72\%$, and $6.48\%$, respectively, in UF-3 (Table 2). A molecular docking approach has been used to model the structure–activity relationships of 20 amino acids, 400 dipeptides, and 8000 tripeptides with XO [9,10].
AAA and HAA were shown to be more likely to connect with the critical amino acid residues around the active core of XO, resulting in a substantial inhibitory action against XO. This could be the reason for the higher XOI activity of UF-3 compared with the other fractions.

## 3.4. Identification of XOI Peptide Sequences and Validation of XOI Activity

Seven peptides (WDDMEKIW, APPERKYSVW, IADRMQKELT, LNSADLIK, LSNLGIVI, IGALRAVA, and HHTFYNELR) derived from SYC were obtained, and their activity values, hemolysis, and toxicity were predicted. Their basic data (AA sequence, MW, pI, predicted activity values, hemolysis, toxicity, and IC50 for XOI activity) are summarized in Table 3. According to Table 3, all of the peptides had a molecular weight between 760 and 1250 Da and were predicted to be non-toxic. Only WW8 and AW10 were predicted to have higher potential physiological activity, and all were non-hemolytic except LSNLGIVI and LNSADLIK. The results indicated that WW8 and AW10 (identification results in Figure 3) are non-toxic and non-hemolytic and have potential physiological activity. WW8 and AW10 presented adequate water solubility. Water solubility has been found to affect bioactive peptide absorption, and dissolution is a limiting factor for physiological function performance [39]. Peptides with considerable water solubility may have high biological availability [15,40]. Among the peptides, the XOI activities were highest for WW8 (IC50 = 3.16 ± 0.03 mM) and AW10 (IC50 = 5.86 ± 0.02 mM), as shown in Figure 4. These results represent a stronger XOI effect than that of the peptide ACECD from skipjack tuna hydrolysates, which had an XOI IC50 of 13.40 mM [13]. IADRMQKELT and LNSADLIK had no XOI activity at 15 mg/mL, and LSNLGIVI, IGALRAVA, and HHTFYNELR did not even dissolve in water at 3 mg/mL, so these five peptides were excluded from the subsequent experiments. The XOI activity of WW8 and AW10 was significantly higher than that of the other peptides, which lack a Trp residue.
These findings suggest that a crucial component of effective XOI activity is the presence of Trp residues in the peptides [9,10]. Nongonierma [9] reported that Trp inhibited XO by 70.3 ± $1.1\%$ at a concentration of 0.25 mg/mL. Li [10] claimed that relatively lower IC50 values were mainly located in the peptides containing Trp residues, reporting that the IC50 values of peptides WDD, WDQW, PPKNW, WPPKN, and WSREEQE were lower than those of peptides HCPF and ADIYTE, and that these Trp-containing peptides had relatively higher XOI activity. One possible explanation is that Trp, with its indole group, has a C6 and C5 ring structure similar to that of the drug allopurinol. Hou et al. [23] came to the same conclusion and demonstrated that peptides with a Trp residue at the C-terminus inhibited XO. Similarly, WW8 and AW10 have a Trp residue at the C-terminus, which was likely the most critical factor contributing to the inhibitory effect. Furthermore, prior research suggested that peptides with HAAs function well as XO inhibitors, because peptides with higher amounts of HAAs may be able to access the hydrophobic domain of the XO active center more easily [36,41]. Interestingly, the peptides with high XOI activity all contain these specific residues. WW8 and AW10 were abundant in HAAs ($50\%$ HAA residues), including Trp, Tyr, Pro, and Ile, which is consistent with prior results on XOI peptides. The stronger XO inhibition by the peptides compared to SYCH implies that peptides generated from SYC, notably WW8 and AW10, could be potential XO inhibitors. Thus, the peptides WW8 and AW10, which are non-toxic and non-hemolytic and have adequate water solubility and XOI activity, were subjected to molecular docking to explain the relationship between the peptides and XO.

## 3.5. Molecular Docking and Visual Analysis

Molecular docking simulates and visualizes the binding sites and binding profiles of small molecule ligands to biological macromolecule receptors.
Figure 5A,B depicts the interaction of the two peptides (WW8 and AW10) with XO, with binding energies of −7.3 and −7.9 kcal/mol (detailed in Table 4), respectively, indicating a strong binding relationship. Generally, the lower the binding energy for the same docking model, the more stable the complex [23]. The forces created between the peptides and XO (PDB: 3NVY), as well as the interactions and matching bonds, are depicted in Figure 5C and Table 4. WW8 established hydrogen bonds with Ile1190, Ala1189, Leu744, and Gln1201, with hydrogen bond lengths of 3.5 Å, 2.7 Å, 2.0 Å, and 2.4 Å, respectively. WW8 also established an electrostatic interaction with His579 and hydrophobic interactions with 24 AAs, including Val1200, Gly1197, Glu1196, Phe1219, Ile1235, Ile1229, Phe1232, Pro1230, His741, Ala1231, Tyr743, Phe238, Phe742, Tyr592, Met1038, Gly1039, Gly796, Gly1039, Met794, Gly795, Ala582, Gln585, Gln1194, and Gly1193. AW10 formed hydrogen bonds with residues including Arg912 and Met1038, with bond lengths of 1.9 Å, 2.1 Å, and 3.4 Å. Moreover, hydrophobic interactions with 22 AAs were distributed around the binding sites of AW10 in XO, including Ala582, His579, Gln585, Met794, Gly796, Gly795, Leu744, Tyr743, Tyr592, Gly1039, Gln1194, Gly1193, Gln1201, Phe798, Ala1198, Glu1196, Ile1235, Phe1239, Gly1197, Val1200, Phe1232, and Ala1231, indicating that the hydrophobic force is another important factor driving the binding of AW10 to XO. It can be hypothesized that, although the two peptides interact with different AA residues of XO, they both bind to XO mainly through hydrogen bonding and hydrophobic forces, thus inhibiting the catalytic activity of XO. The lower XOI IC50 of WW8 compared to AW10 could be attributed to the greater number of hydrogen bonds and hydrophobic contacts in the WW8–XO interaction.

## 4. Conclusions
Peptides (WDDMEKIW and APPERKYSVW) with XOI activity were identified after enzymatic hydrolysis and UF separation of SYC proteins, and the IC50 values of their XOI activity (3.16 ± 0.03 mM and 5.86 ± 0.02 mM, respectively) were determined in vitro. These findings were supported by molecular docking of the two peptides with the strongest XOI activity, which highlighted the importance of hydrophobic interactions and hydrogen bonds in the establishment of a stable complex conformation and the resulting inhibitory effect of the peptides. We anticipate that these peptides can be employed to manage hyperuricemia as natural XO inhibitors. Bioactive peptides will continue to constitute an important area of study in the future, with an expanding array of uses in food, medicine, and cosmetics. Although peptides derived from SYC proteins exhibit XOI activity, in vivo experiments and clinical trial data are required to confirm these findings, explain unknown mechanisms, demonstrate the efficacy of the active peptides, and ensure their bioavailability and safety profile. These are all objectives to be fulfilled in future work. In terms of preserving peptide activity, micro- and nano-encapsulation of bioactive peptides may be an effective way to manage their release and avoid degradation in order to optimize their bioavailability and effectiveness. The advancement of oral administration and bioactive peptide delivery introduces both possibilities and limitations to be addressed in future research.
# The Effect of Ginger (Zingiber officinale Roscoe) Aqueous Extract on Postprandial Glycemia in Nondiabetic Adults: A Randomized Controlled Trial

## Abstract

Ginger has shown beneficial effects on blood glucose control due to its antioxidant and anti-inflammatory properties. The present study investigated the effect of ginger aqueous extract on postprandial glucose levels in nondiabetic adults and characterized its antioxidant activity. Twenty-four nondiabetic participants were randomly assigned into two groups (NCT05152745): the intervention group ($n = 12$) and the control group ($n = 12$). Both groups were administered 200 mL of an oral glucose tolerance test (OGTT), after which participants in the intervention group ingested 100 mL of ginger extract (0.2 g/100 mL). Postprandial blood glucose was measured at fasting and after 30, 60, 90, and 120 min. The total phenolic content, flavonoid content, and antioxidant activity of the ginger extract were quantified. In the intervention group, the incremental area under the curve for glucose levels decreased significantly ($p \leq 0.001$) and the maximum glucose concentration was significantly reduced ($p \leq 0.001$). The extract possessed a polyphenolic content of 13.85 mg gallic acid equivalent/L, a flavonoid content of 3.35 mg quercetin equivalent/L, and a high superoxide radical inhibitory capacity ($45.73\%$). This study showed that ginger has a beneficial effect on glucose homeostasis under acute conditions and encourages the use of ginger extract as a promising source of natural antioxidants.

## 1. Introduction

The postprandial blood glucose concentration has been reported as a key factor in glucose homeostasis control, which seems to be effective in preventing the development and progression of long-term diabetes complications [1]. According to epidemiological data, there is an association between cardiovascular and all-cause death and postprandial hyperglycemia status in nondiabetic patients [2].
In addition, the hyperglycemic status combined with clinical parameters can also predict an increased risk of developing diabetes [3]. It has been reported that postprandial glycemia profiles can be influenced by several factors, such as carbohydrate absorption, insulin and glucagon secretion and/or action, and glucose metabolism in different tissues [1]. Although the peak glucose concentration of nondiabetic individuals occurs about 60 min after the meal, the meal composition influences the magnitude and timing of the peak [1]. There is also evidence that, during hyperglycemic conditions, oxygen free radicals are overproduced, leading to oxidative stress and cellular damage. This oxidative stress has been correlated with the development of diabetes complications [4]. Ginger (Zingiber officinale Roscoe) is a traditional herb belonging to the Zingiberaceae family that has revealed beneficial effects on human health [5]. This herb has been used to treat nausea and vomiting, pain, metabolic syndrome, osteoarthritis, and obesity conditions [6,7,8,9,10]. In addition, it has been proposed that ginger possesses antioxidant and anti-inflammatory properties [11,12]. The main classes of components responsible for ginger's bioactivities include shogaols, gingerols, zingerone, and zingiberene [13,14]. It has been shown that these bioactive ginger compounds possess antidiabetic properties that are thought to enhance insulin secretion through the modulation of KATP channels [15]. In addition, 6-gingerol potentiates the glucagon-like peptide 1 (GLP-1)-mediated glucose-stimulated insulin-secretion pathway in the pancreatic beta cell [16]. Another proposed mechanism of action postulates that the possible stimulation of Rab27a GTPase, in isolated islets, may contribute to the exocytosis of insulin-containing dense core granules. Increased Rab27a GTPase may also increase the translocation of the glucose transporter 4 (GLUT4) vesicle to the membrane of skeletal myocytes [16].
Currently, there is promising evidence of the beneficial properties of ginger extract, which seems to be effective in lowering blood glucose levels [17]. According to the study by Zhu et al., ethanolic ginger extract (200 mg/kg body weight) demonstrated a significant antihyperglycemic effect in streptozotocin (STZ)-induced diabetic rats over 20 days [17]. Ginger aqueous extract (500 mg/kg body weight) significantly reduced the blood glucose level on the 8th day of ginger treatment compared with baseline in alloxan-induced diabetic rats [18]. However, recently published data on human studies have shown conflicting results regarding blood glucose control [19]. In Karimi et al.'s study, the ingestion of a ginger supplement (four capsules; 3 g/day) for 7 weeks did not significantly change blood glucose in the ginger group (6.5 ± 0.4 mmol/L) compared to the placebo group (6.5 ± 1 mmol/L) [20]. Additionally, in another study, the ingestion of a ginger capsule (1000 mg per day) for 10 weeks significantly reduced fasting blood glucose by up to $20\%$ in the nondiabetic adult ginger group at the end of the experimental protocol [21]. Conversely, in the Bordia et al. study, the ingestion of ginger powder (4 g per day) by nondiabetic patients for 3 months did not affect fasting and postprandial blood glucose levels [22]. Among the studies in the literature on the effect of ginger on blood glucose, few have examined the effect of this herb on postprandial glycemia. Hung and co-workers (2022) demonstrated that a spice mix meal containing ginger significantly reduced postprandial glucose levels in obese and overweight adults [23]. Given the scarcity of literature concerning ginger's effect on the glucose response, the main aim of the present study was to investigate the effect of ginger (Zingiber officinale Roscoe) aqueous extract (0.2 g/100 mL) on postprandial glucose levels in nondiabetic adults.
The second aim was to characterize the antioxidant activity of the ingested ginger extract. ## 2.1. Ethical Consideration This clinical trial was approved by the Egas Moniz School of Health and Science Ethics Committee (Project Code 519, approval on 23 November 2016). Participation was voluntary, and informed consent was obtained from all participants after they received oral and written information about the study. Data confidentiality and anonymity were guaranteed through a code attributed to each participant. The experimental procedure involving humans was carried out according to the Declaration of Helsinki and CONSORT guidelines. This clinical trial is registered on Clinicaltrials.gov (NCT05152745). ## 2.2. Participants and Study Design This randomized controlled clinical trial, in which the researcher who performed the statistical analysis was blinded, was conducted at Campus Universitário Egas Moniz, Monte de Caparica, Portugal. Twenty-four nondiabetic male and female participants between 18 and 40 years of age were selected. After the eligibility criteria were confirmed, participants were sequentially numbered and randomly allocated to an intervention group (n = 12) or a control group (n = 12). The eligibility and inclusion criteria included subjects of both genders without glucose metabolism alterations (fasting blood glucose < 126 mg/dL or 6.99 mmol/L). Exclusion criteria were fasting for less than 8 or more than 10 h, medication for glycemia control, gastrointestinal symptoms or disease, pregnancy or lactation, and allergy to ginger. Participants were asked not to ingest ginger on the day before the intervention. After 8 h of fasting, the intervention group performed an oral glucose tolerance test (OGTT) immediately followed by ginger extract administration; the control group performed the OGTT alone. ## 2.3. 
Ginger Extract Preparation The ginger powder (Zingiber officinale Roscoe) was obtained from a Portuguese company (product of Indian origin; batch number LI1GIGRNT150012) and stored under standard environmental conditions (21–23 °C, 50–60% humidity) until needed. Ginger powder was individually weighed (0.2 g per dose) and added to 100 mL of water, producing the ginger aqueous extract, which was boiled for 10 min. After cooling to room temperature, the ginger extract solution was distributed to each participant. This method was adapted from Wilkinson [2000] [24]. The ginger extract obtained was subjected to total phenolic and flavonoid content determination, as well as a radical inhibition assay. ## 2.4. Intervention Blood samples were collected from each participant after overnight fasting (8 h), using a capillary blood drop, before the intervention (t0). The control group ingested an oral glucose solution (75 g of dextrose in 200 mL of water) [25], and the intervention group ingested a ginger aqueous extract solution immediately after the oral glucose solution (75 g of dextrose in 200 mL of water). Blood samples were collected at 30, 60, 90, and 120 min after glucose solution and/or ginger extract ingestion in both groups. Blood glucose levels were measured with glucose meter equipment (OneTouch Select Plus Flex), using test strips and a sterilized lancet. ## 2.5. Data Collection General characteristics of the participants were collected through a questionnaire, including age and anthropometric parameters (weight, height, and body mass index). A 24 h dietary recall questionnaire covering the day before the intervention was administered to participants, and an investigator instructed them on how to complete the food record. The ingested food quantity was estimated using a picture book. The Food Processor SQL (version 10.5.0) was used to obtain mean intakes of total energy (kcal), total carbohydrates (g), total protein (g), and total lipids (g). 
## 2.6. Chemical Analysis Folin–Ciocalteu reagent and gallic acid 1-hydrate (C6H2(OH)3COOH·H2O) were from PanReac (Cascais, Portugal). Quercetin dihydrate (C15H10O7·2H2O) was from Extrasynthese (Lyon, France). Anhydrous aluminum chloride, potassium acetate, sodium carbonate, and tris(hydroxymethyl)aminomethane were from Merck (Alges, Portugal). Phenazine methosulfate (PMS), nicotinamide adenine dinucleotide hydride (NADH), and nitro-blue tetrazolium chloride (NBT) were from Sigma Aldrich (Lisbon, Portugal). All reagents were of pro-analysis grade. All absorbance measurements were performed on a Perkin–Elmer Lambda 25 spectrophotometer (Lisbon, Portugal). Reagents were weighed on an analytical balance (Sartorius, ±0.00001 g) (Lisbon, Portugal). ## 2.7. Total Phenolic Content Determination The total phenolic content of 7 ginger extract samples was quantified according to the Folin–Ciocalteu method [26] and expressed as mg gallic acid equivalent (GAE)/L of ginger extract. ## 2.8. Flavonoid Content Determination The total flavonoid content of 7 ginger extract samples was quantified according to the Prabha method [26] and expressed as mg quercetin equivalent (QCE)/L of ginger extract. ## 2.9. Radical Inhibition Assay The superoxide anion (O2∙−) scavenging activity of the ginger extract was determined based on the Morais and Alam methods [27,28]. The superoxide anion was generated by reacting phenazine methosulfate (PMS), nicotinamide adenine dinucleotide hydride (NADH), and oxygen, reducing NBT to formazan. A volume of 0.5 mL of ginger extract was added to 2 mL of a solution containing NADH (189 μM) and nitroblue tetrazolium (NBT) (120 μM) in Tris-HCl (40 mM, pH = 8). The reaction was started by the addition of 0.5 mL of PMS (60 μM). After 5 min of incubation, the control absorbance was measured at 560 nm at room temperature. 
The percentage of superoxide anion inhibition capacity was calculated using the following equation: Inhibition capacity (%) = [(Absorbance(control) − Absorbance(sample)) / Absorbance(control)] × 100. ## 2.10. Statistical Analysis Statistical analysis of the data was performed using SPSS® (Statistical Package for the Social Sciences) software, version 25.0. Descriptive statistics are reported as the mean ± SD (standard deviation) or SEM (standard error of the mean). A mixed-design repeated measures ANOVA was used to assess the difference between the 2 groups in postprandial blood glucose at the different time points. After assumption verification, differences between the 2 groups in total energy, total carbohydrate, total protein, and total lipid intake, maximum concentration (Cmax), variation of maximum concentration (ΔCmax), and incremental area under the curve (AUCi) of glucose were assessed using the independent samples t-test. The AUCi was calculated using GraphPad Prism (version 7.03) software. All statistical tests were performed at the 5% level of significance. The sample size required for the study was calculated by simulation using G*Power software version 3.1.9.4, with a statistical significance level of 5% for an expected medium-to-large effect size of 20%. Additionally, a low correlation (0.40) was assumed among repeated measures, with a sphericity correction epsilon of 0.65. ## 3.1. Participant Enrollment In accordance with the CONSORT participant sample description, a total of twenty-four participants were enrolled in and completed the study, twelve in each group, as shown in Figure 1. ## 3.2. Participant Characteristics The general characteristics of the nondiabetic male and female participants are shown in Table 1. A total of 24 participants, 12 subjects in the intervention group (four male, eight female) and 12 subjects in the control group (five male, seven female), completed the study. 
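The superoxide inhibition percentage (Section 2.9) and the incremental AUC (Section 2.10) both reduce to simple arithmetic. A minimal, stdlib-only Python sketch follows; all numeric readings are hypothetical, and the baseline-clipped trapezoidal rule shown is one common AUCi convention, not necessarily the exact settings used in GraphPad Prism:

```python
# Illustrative sketch of the two simple computations used in the study:
# the superoxide inhibition percentage and the incremental area under the
# curve (AUCi) of glucose. All numeric values below are hypothetical.

def inhibition_capacity(abs_control: float, abs_sample: float) -> float:
    """Inhibition capacity (%) = (A_control - A_sample) / A_control * 100."""
    return (abs_control - abs_sample) / abs_control * 100

def incremental_auc(times_min: list, glucose: list) -> float:
    """Trapezoidal area above the baseline (t0) glucose value.

    Increments below baseline are clipped to zero, one common AUCi convention.
    """
    baseline = glucose[0]
    auc = 0.0
    for (t0, g0), (t1, g1) in zip(zip(times_min, glucose),
                                  zip(times_min[1:], glucose[1:])):
        d0 = max(g0 - baseline, 0.0)
        d1 = max(g1 - baseline, 0.0)
        auc += (d0 + d1) / 2 * (t1 - t0)
    return auc

print(round(inhibition_capacity(0.80, 0.45), 2))   # hypothetical absorbances
print(incremental_auc([0, 30, 60, 90, 120], [5.0, 7.8, 8.6, 7.1, 5.9]))
```

With these hypothetical OGTT readings (mmol/L at 0–120 min), the AUCi is expressed in mmol/L·min, matching the five-point sampling schedule used in the trial.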
Participants from both groups did not significantly differ in age (p = 0.173), body mass index (p = 0.116), weight (p = 0.725), or height (p = 0.386). The total nutritional composition of the meals on the day before the intervention was analyzed for each participant in both groups. No significant differences (p > 0.05) were observed in carbohydrates and lipids between groups, as shown in Table 2. The mean total protein and total energy intakes were significantly higher in the intervention group compared to the control group (p < 0.05). ## 3.3. Glycemic Response Blood glucose levels were measured during an oral glucose tolerance test (OGTT) in the control and intervention groups, as shown in Table 3. The mixed-design repeated measures ANOVA showed a significant interaction between the between-subjects and repeated measures factors (p < 0.001), meaning that differences in postprandial blood glucose levels between groups depend on the moment of measurement, and that differences between measurement times depend on the group. The intervention group showed a significantly decreased blood glucose incremental area under the curve (p < 0.001) and variation of blood glucose maximum concentration (p < 0.001) compared to the control group (Table 4). ## 3.4. Total Phenols, Flavonoids, and Antioxidant Activity The total phenol and flavonoid contents of the ginger extract used in this study are shown in Table 5. The results revealed a high total phenol (13.85 ± 0.1 mg GAE/L extract) and flavonoid (3.35 ± 0.2 mg QCE/L extract) content. Additionally, the ginger extract showed a high inhibitory capacity for superoxide radical scavenging (45.73%), with an IC50 of 15.66 mg GAE/L. ## 4. Discussion The main aim of our study was to investigate whether ginger extract improved the postprandial glucose concentration in nondiabetic adults. 
The findings of our study revealed that the ingestion of ginger aqueous extract (0.2 g/100 mL) improved the glycemic response in nondiabetic subjects compared to the control group. Data analysis showed a significant interaction between the between-subjects and repeated measures factors (p < 0.001), meaning that postprandial blood glucose mean values differ between groups depending on the moment of measurement. In addition, the results showed that postprandial glycemia at the different moments changed depending on the group. The ginger extract reduced the blood glucose incremental area under the curve (AUCi) in the intervention group (169.75 ± 17.3) compared to the control group (334.43 ± 32.4) (p < 0.001), and the glucose maximum concentration in the intervention group (7.72 ± 0.28 mmol/L) compared to the control group (9.57 ± 0.43 mmol/L) (p < 0.001). These results may be associated with the potential properties of ginger’s bioactive compounds, namely an insulin-mimetic action leading to increased glucose uptake through the upregulation of GLUT4 expression [16]. Furthermore, the postprandial glycemic responses obtained during the oral glucose tolerance test differed between groups, suggesting a beneficial effect on the postprandial glycemic response after ingestion of ginger extract. According to the literature, the glycemic response depends on the macronutrient composition of the meals [29]. In fact, in the present study, the average total protein intake on the day before the intervention was significantly (p = 0.011) higher in the intervention group (89.86 ± 10.63) than in the control group (56.33 ± 4.85). The effect of protein intake on blood glucose has been studied in the literature using different methodological approaches. Khan et al. 
[1992] showed that the ingestion of 50 g of protein in the form of cottage cheese did not significantly reduce plasma glucose concentration compared with the control group (water alone) over 8 h [27]. In addition, Khoury et al. [2010] demonstrated that postprandial glucose peaks were significantly lower following a high-protein meal compared with a high-carbohydrate meal [29]. Different studies have also evaluated the effect of protein ingestion on the glycemic response through blood glucose concentration analysis for 180 min post-meal. Co-ingestion of whey protein and milk protein with mixed meals improves postprandial glycemia [28,30]. On the other hand, in Paterson et al.’s study, dietary protein did not seem to influence glycemic control in nondiabetic individuals [31]. In this context, owing to the diversity of methods and results in the literature, the influence of protein intake on postprandial glucose is not fully understood. For this reason, although the results showed a beneficial effect of ginger extract ingestion on glycemia, further studies with homogeneous and comparable sample sizes, methodologies, and dietary patterns should be conducted. According to the literature, few clinical trials have investigated the effect of ginger extract on postprandial glycemia; most studies evaluate the effect of ginger on fasting glycemia in diabetic patients. Additionally, the findings regarding ginger’s effect on glucose homeostasis appear contradictory. A recent meta-analysis that included eight randomized trials, with a total of 454 type 2 diabetic participants, revealed that ginger ingestion did not significantly improve the glycemic response in patients with type 2 diabetes mellitus (p = 0.16). 
Additionally, this study also showed that HbA1c significantly improved in the participants with ginger ingestion (p = 0.02) from baseline to follow-up, suggesting that ginger may have a beneficial impact on glucose control over a longer period of time [20]. Other studies have reported that ginger powder significantly reduces fasting glucose concentration. In a double-blind placebo-controlled randomized clinical trial, type 2 diabetic patients showed significant differences in serum glucose (p < 0.001) in the intervention group compared with the control group after 3 months of the intervention (3 g per day of powdered ginger) [32]. Additionally, in Arablou et al.’s study, the ingestion of 1.6 g of powdered ginger (capsules) per day for 12 weeks significantly lowered (p = 0.02) fasting plasma glucose compared with the placebo group [33]. The ingestion of 2 g of ginger supplement for 12 weeks by type 2 diabetic patients also reduced serum blood glucose concentration (p < 0.001) [34]. In addition to this beneficial effect on glycemia, ginger powder has been shown to decrease insulin resistance [35] and to significantly improve insulin levels and hemoglobin A1c [33]. In a randomized double-blind placebo-controlled trial with 64 type 2 diabetic patients (28 patients in the ginger group; 30 patients in the placebo group), ginger supplementation at a lower dose (2 g/day) for 2 months had a beneficial effect on insulin levels but no significant effect on fasting blood glucose. The dietary intakes of the participants revealed no significant difference in macronutrient intake between groups, either at baseline or at the end of the study [36]. The discrepancy among the results in the literature could be attributed to the heterogeneity of study designs, ginger chemical composition, doses, formulations, extraction processes, and population samples [37]. 
Nevertheless, according to recent data, the consumption of ginger seems safe and acts beneficially on human health and well-being, with a potential effect on glycemic control [38]. The mechanism of action of ginger extract responsible for its glucose homeostasis control effects is supported by animal and in vitro studies. The administration of 200 mg/kg of gingerol for 4 weeks significantly potentiated the GLP-1-mediated glucose-stimulated insulin-secretion pathway in the pancreatic beta cells of treated type 2 diabetic mice, compared to untreated type 2 diabetic mice [16]. The increase in insulin secretion through endocrine hormones may be related to a beneficial effect on the regulation of plasma glucose concentration. In C2C12 cells, a polyphenol-rich Indian ginger extract increased insulin-stimulated glucose uptake [39]. Moreover, different studies have explored several underlying mechanisms promoted by different ginger bioactive compounds, which can play a role in glucose control in peripheral tissues. [6]-Gingerol increased glucose-stimulated insulin secretion [16]. This compound upregulated and activated cAMP, PKA, and CREB in the pancreatic islets, which can contribute to the insulin-secretion pathway [16]. In addition, [6]-gingerol regulated Rab27a GTPase in pancreatic islets, leading to the exocytosis of insulin-containing dense-core granules [16]. Additionally, S-[8]-gingerol seems to increase the protein level of GLUT4 in a dose-dependent manner in L6 myotubes [40]. Moreover, our study confirms that ginger aqueous extract possesses high antioxidant activity through its free radical scavenging capacity. This finding can be correlated with the high polyphenolic content observed in the ginger extract since, according to the literature, there is a significant correlation between free radical scavenging capacity and total phenolic content [41]. 
According to Manjunathan et al., the antioxidant properties and phenolic content of ginger aqueous extract could also be attributed to gingerol bioactive compound activity [42]. These findings are in accordance with the Fathi study, in which a hydroethanolic extract of ginger demonstrated a good level of DPPH scavenging activity and total phenolic content per gram of dry extract [43]. The bioactive compounds identified in ginger, namely [6]-gingerol, [8]-gingerol, [10]-gingerol, and [6]-shogaol, showed important scavenging activities, with IC50 values of 26.3, 19.47, 10.47, and 8.05 µM against the DPPH radical and of 4.05, 2.5, 1.68, and 0.85 µM against the superoxide radical, respectively [44]. Since hyperglycemia induces free radical formation, including the superoxide anion [4], the administration of ginger extract may also contribute beneficially to the prevention of oxidative damage through its high inhibitory capacity for superoxide radical scavenging (45.73%). Limitations of this study include the unblinded design with respect to the investigators and the study participants; blinding was not possible given the nature of the intervention. The authors did not evaluate plasma insulin or plasma glucagon-like peptide 1 (GLP-1) concentrations, which would be important for analyzing the effect of ginger extract on GLP-1 and insulin secretion and for understanding its mechanism of action. Additionally, it would be interesting to test other ginger aqueous extract doses in order to explore a possible dose dependence of the postprandial glycemic response. Further research should be undertaken with a larger sample size and over a longer period, as part of a mixed-meal daily intake, in order to verify the effect of ginger extract in the long term. ## 5. 
Conclusions The current study indicates that the ingestion of ginger (Zingiber officinale Roscoe) aqueous extract (0.2 g/100 mL) reduces the blood glucose incremental area under the curve and the postprandial maximum glucose level variation in nondiabetic subjects. In addition, ginger extract possesses substantial antioxidant activity through its free radical scavenging capacity. The present study adds support to the beneficial properties of ginger (Zingiber officinale Roscoe), suggesting that this herb extract may be effective against hyperglycemic status in nondiabetic subjects.
# Cost-Effectiveness of Prolonged Physical Activity on Prescription in Previously Non-Complying Patients: Impact of Physical Activity Mediators ## Abstract In Sweden, physical activity on prescription (PAP) is used to support patients in increasing their levels of physical activity (PA). The role of healthcare professionals in supporting PA behavior change requires optimization in terms of knowledge, quality and organization. This study aims to evaluate the cost-effectiveness of support from a physiotherapist (PT) compared to continued PAP at a healthcare center (HCC) for patients who remained insufficiently active after 6-month PAP treatment at the HCC. The PT strategy consisted of a higher follow-up frequency as well as aerobic physical fitness tests. The analysis was based on an RCT with a three-year time horizon, including 190 patients aged 27–77 with metabolic risk factors. The cost per QALY for the PT strategy compared to the HCC strategy was USD 16,771 from a societal perspective (including individual PA expenses, production loss and time cost for exercise, as well as healthcare resource use) and USD 33,450 from a healthcare perspective (including only costs related to healthcare resource use). Assuming a willingness-to-pay of USD 57,000 for a QALY, the probability of cost-effectiveness for the PT strategy was 0.5 from the societal perspective and 0.6 from the healthcare perspective. Subgroup analyses of cost-effectiveness based on individual characteristics regarding enjoyment, expectations and confidence indicated potential for identifying cost-effective strategies based on mediating factors. However, this needs to be further explored. In conclusion, the PT and HCC interventions are similar from a cost-effectiveness perspective, indicating that both strategies are equally valuable in healthcare’s range of treatments. ## 1. 
Introduction Globally, non-communicable diseases contribute to more than 70% of total deaths [1], with cardiovascular diseases as the most common cause of death and metabolic risk factors considered the most prominent for the global burden of disease [1,2]. Regular physical activity (PA) provides a basis for positive health effects, including the prevention and treatment of a range of non-communicable diseases [3,4,5]. However, only a minority of all adults reach the internationally recommended PA level of 150 min of moderate-intensity PA or 75 min of vigorous-intensity PA per week [6,7]. The economic burden of physical inactivity on societies around the world is substantial [8]. Although several PA interventions are considered cost-effective, several factors complicate the interpretation of results in published research, such as short time perspectives, the measurement of single treatment effects only, the variability of interventions across population groups and a lack of cost estimates and savings in the cost-effectiveness analyses [9,10,11]. The physical activity on prescription (PAP) method used in Swedish healthcare by licensed healthcare professionals includes patient-centered counselling, individualized PA recommendations with a written prescription and individualized structured follow-ups. From the patient’s perspective, it seems crucial to individualize all parts of the PAP treatment in order to reinforce behavior changes towards increased PA [12,13,14]. A systematic review of Swedish PAP found a high level of evidence that physically inactive patients in the healthcare setting increased their PA levels [15,16]. Previous studies of PAP have evaluated the effects of shorter interventions, but they do not provide guidance on how healthcare providers should act when patients do not reach sufficient levels of PA within this time frame. 
Hence, there is a need for further studies of long-term PAP interventions with longer follow-up periods [17,18]. Lifestyle change is usually an ongoing process that takes several years [19] and is affected by mediating factors associated with increased PA, such as enjoyment, outcome expectations and confidence in succeeding in changing the PA level [20,21,22]. These factors, defined as intervening causal variables, are important in creating a cause–effect pathway between an intervention and PA [23,24] and could optimally be part of the patient-centered work of tailoring interventions with different levels of support. Immediate rewards of PA (e.g., enjoyment) predict long-term adherence to PA, whereas delayed rewards (e.g., health benefits) do not [25]. Therefore, it is likely that those who experience high enjoyment do not need any support at all to adhere to PA, and that the lower the experienced enjoyment, the greater the need for support for sustainable PA. Outcome expectations represent the belief that a behavior change, e.g., increased PA, will lead to a certain outcome [26]. Although not consistent across studies, outcome expectations are considered important in predicting PA behavior [27,28]. Confidence, or self-efficacy expectation, is described as the confidence in one’s capability to change one’s behavior (e.g., PA) [29]. Having confidence in one’s readiness to change the PA level has been shown to be strongly associated with PA [30]. In this study, the mediating factors of enjoyment, outcome expectations and confidence were measured and have been described in detail previously [31,32]. As behavior changes take time [19], the question is how healthcare providers should act when the desired effect on PA levels is not achieved after a certain period of time, even though the patient is motivated to continue with PAP. 
As far as we know, there are no previous studies showing what healthcare should do when a lifestyle intervention has failed, which, according to the literature, is a common situation [17,33,34]. Economic evaluations of health interventions compare the costs and consequences of different strategies in order to provide decision-makers with information regarding choices affecting health and the use of resources. Traditionally, these analyses answer the question of which method is most cost-effective for the average patient. However, recently updated international guidelines on the reporting of health economic evaluation results, known as the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement, include new recommendations on subgroup analyses, acknowledging that heterogeneity among patients means that strategies might be cost-effective for specific groups while not for others [35]. The main aim of this study is to evaluate the cost-effectiveness of a three-year prolonged program of enhanced PAP support delivered by a physiotherapist (PT) compared to continued (standard) PAP treatment at the healthcare center (HCC) for patients who remained insufficiently physically active after a prior six-month period of PAP treatment in a primary healthcare setting. A secondary aim was to explore whether enjoyment, expectations and confidence have potential for identifying cost-effective strategies at a subgroup level. ## 2.1. Study Design and Study Population This cost–utility analysis was based on a randomized controlled trial (RCT) [31] of PAP treatment conducted with two intervention arms: one PT group and one HCC group. The time horizon was three years, and the analysis was performed from both a healthcare and a societal perspective. The study was approved by the Regional Ethical Review Board in Gothenburg, Sweden (ref: 529-09). The present analysis forms part of a long-term follow-up study including 444 patients, which has been described previously [32,36]. 
Out of these patients, 190 did not achieve the internationally recommended minimum PA level after six months of PAP treatment and were thus included in this study. These 190 patients were living in an urban area of Gothenburg, Sweden. The patients were 27–77 years of age and had at least one metabolic risk factor (Table 1). Before inclusion in the study, they received standard PAP treatment for six months during 2010–2014 at one of 15 designated healthcare centers in Gothenburg. At the six-month follow-up, 56% of the 190 patients had increased their PA level to some extent, but none of the included patients reached a sufficiently high PA level according to the internationally recommended minimum of 150 min/week. PA level was assessed via two questions regarding moderate- and vigorous-intensity PA during the past week. The patients agreed, orally and in writing, to participate in the RCT at the six-month follow-up, and were then randomized to either enhanced PAP treatment provided by a physiotherapist (PT group, n = 98) or continued ordinary PAP treatment delivered by nurses at the healthcare center (HCC group, n = 92). Randomization was based on block randomization, with automated computer-based stratification by age, sex and BMI. Each patient was then contacted by the PT or HCC for further intervention. A more detailed description of the study population has been published previously [31]. ## 2.2. Intervention The PA and PAP interventions were offered to the patients according to the Physical Activity in the Prevention and Treatment of Disease (FYSS) handbook and the concept of the Swedish PAP model [37,38]. The intervention is described in detail elsewhere [31]. In the HCC group, PAP treatment was provided by nurses, whose area of expertise was nursing and who were trained in the health effects of PA and in treatment with PAP. 
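The allocation procedure described above (block randomization with automated stratification by age, sex and BMI) can be sketched as follows. This is an illustration only: the block size of 4, the strata cut-offs and the helper names are assumptions, since the trial's actual computer-based system is not detailed here.

```python
import random

# Illustrative sketch of stratified block randomization to the PT and HCC arms.
# Block size (4) and strata cut-offs are hypothetical assumptions.

def stratum(age: int, sex: str, bmi: float) -> tuple:
    """Map a patient to a stratum; cut-offs are illustrative, not the trial's."""
    return (age >= 50, sex, bmi >= 30)

def block_stream(rng: random.Random):
    """Yield arm labels balanced 2:2 within every block of four draws."""
    while True:
        block = ["PT", "PT", "HCC", "HCC"]
        rng.shuffle(block)
        yield from block

def randomize(patients, seed: int = 1) -> dict:
    """patients: iterable of (patient_id, age, sex, bmi) tuples."""
    rng = random.Random(seed)
    streams = {}          # one independent block stream per stratum
    assignment = {}
    for pid, age, sex, bmi in patients:
        s = stratum(age, sex, bmi)
        if s not in streams:
            streams[s] = block_stream(rng)
        assignment[pid] = next(streams[s])
    return assignment

demo = [(1, 45, "F", 31.0), (2, 62, "M", 27.5), (3, 48, "F", 33.2), (4, 39, "F", 29.0)]
print(randomize(demo))
```

Within each stratum, every completed block of four contains exactly two PT and two HCC assignments, which keeps the arms balanced on the stratification variables.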
The treatment included an individualized dialogue concerning PA, an individually dosed PA recommendation including a written prescription and an individually adjusted follow-up. The majority of the patients received continued PAP treatment at follow-ups 2–3 times a year during the intervention period. The physiotherapists who provided treatment in the PT group, whose area of expertise was work physiology, were also educated in PAP treatment. The PT intervention included the first two individually adapted parts described for the HCC group, that is, the individualized dialogue and the individual PA recommendation. The third part (the follow-up) differed between the two interventions; in the PT group, it was arranged via a fixed follow-up schedule. This schedule contained a total of ten follow-up sessions during the three-year intervention: six during the first year, three during the second year and the final one at the three-year follow-up. The PT group also underwent five additional aerobic physical fitness tests during the intervention period, using an ergometer bicycle. The results from the physical fitness tests formed the basis for a continuing dialogue with the patient concerning PA and for an individual dosage of PA regarding frequency, duration and intensity, recorded in a written prescription. ## 2.3. Measurements The patients’ own costs, health-related quality of life (HRQOL), healthcare resource use and absence from work were measured at baseline and at the one-, two- and three-year follow-ups. Costs were estimated based on data from the follow-up questionnaires and administrative sources, as described below for each type of cost included. Unit prices used for the estimations are summarized in Table 2. Costs were expressed in 2018 prices and a yearly discount rate of 3% was applied. 
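A 3% yearly discount rate means that a cost incurred in follow-up year t is divided by 1.03 raised to the power t. A minimal sketch with hypothetical yearly cost figures:

```python
# Minimal sketch of discounting follow-up-year costs at a 3% yearly rate.
# Year 0 (baseline year) is undiscounted; the yearly cost values are hypothetical.

def discounted(value: float, year: int, rate: float = 0.03) -> float:
    return value / (1 + rate) ** year

yearly_costs = [1200.0, 800.0, 650.0]        # hypothetical USD for years 0-2
total = sum(discounted(cost, year) for year, cost in enumerate(yearly_costs))
print(round(total, 2))
```

The same present-value formula is applied analogously to QALYs accrued in later follow-up years.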
HRQOL was measured with the Swedish version of the Short Form 36 (SF-36 Standard Swedish Version 1.0) [39] and transformed into quality-adjusted life years (QALYs) with the SF-6D [40] and a UK tariff [41]. The UK tariff is commonly applied to Swedish populations as well, since no Swedish tariff is available. Details on the HRQOL in both groups at each time point are available in Supplemental Table S1. Cost-effectiveness estimates are presented both from a healthcare perspective and from a societal perspective. The healthcare perspective includes healthcare resource use in terms of intervention costs as well as costs for visits to primary care or hospital. For the societal perspective, of which healthcare resource use forms a part, individual expenses for PA, production loss due to sick leave and the time cost of exercise are added. The amount of healthcare resource use in outpatient care was based on the self-reported number of visits to primary healthcare centers and hospitals stated in the yearly follow-up questionnaires. The number of visits to the physiotherapist in the PT group was obtained from administrative records in the study. The costs for all healthcare resource use were estimated based on unit costs differentiated by profession according to the standard production prices negotiated for the trade of healthcare between county councils [42], stated in 2018 prices. Individual expenses related to PA, such as the costs of equipment or transportation, were reported by the patients in the yearly follow-ups. Patients stated their expenses for the last month, which were then multiplied by 12 to estimate yearly expenses. Since different individuals had entered the interventions in different years, all expenses were converted to 2018 prices using the Swedish consumer price index (CPI). Conversion to USD was based on the mean exchange rate on 1 January 2018 (1 USD = 8.78 SEK). 
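The price-level and currency adjustment described above can be sketched as a two-step conversion: expenses reported in year y SEK are inflated to 2018 SEK with the CPI ratio and then converted to USD at the stated rate. The CPI index values below are hypothetical placeholders, not the official Swedish figures:

```python
# Sketch of the expense conversion: reported SEK -> 2018 SEK via CPI -> USD.
# CPI index values are hypothetical; the exchange rate is the one stated above.

CPI = {2014: 313.5, 2018: 328.4}    # hypothetical Swedish CPI index values
SEK_PER_USD = 8.78                  # mean exchange rate, 1 January 2018

def to_usd_2018(amount_sek: float, year: int) -> float:
    sek_2018 = amount_sek * CPI[2018] / CPI[year]
    return sek_2018 / SEK_PER_USD

monthly_expense = 250.0             # hypothetical SEK, reported for one month
yearly_usd = to_usd_2018(monthly_expense * 12, 2014)
print(round(yearly_usd, 2))
```

The ×12 step mirrors the study's extrapolation of one reported month of PA expenses to a full year.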
The cost of increased exercise time was estimated based on how patients rated the experience of exercise time in comparison to the experience of leisure activity forgone and of household work [43]. The mean net salary in Sweden was used in the estimation [44]. Time spent on exercise was measured with the International Physical Activity Questionnaire [45]. When the experience of PA time was rated higher than that of leisure activity forgone, there was no time cost. When the experience of PA was rated lower than that of household work (cleaning), the time cost was valued at half the net salary. When the experience of PA was rated in between the experience of household work and that of leisure activity forgone, the cost was set to the part of the half net salary that corresponded to the relative position between the two. Individuals were asked about the amount of sick leave from paid work in the yearly follow-ups, and their answers were then converted to full days of absence from work. Each full day of sick leave was then valued according to the human capital approach, based on average wages including payroll taxes [42,46]. Production loss was only estimated for those who stated that they were absent from paid work.

## 2.4. Mediating Factors for Increased PA

Based on the positive relationship between PA and health, mediators for increased PA can also be seen as mediators for improved health. Enjoyment was measured using the Physical Activity Enjoyment Scale (PACES) [47], modified by Motl et al. [48], including 16 positively or negatively worded items rated on a 5-point Likert scale (1: Does not apply at all, 5: Truly applies). The negatively worded items were reverse-scored, and the responses were added to a score that ranged from 16 to 80.
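The scoring just described can be sketched as follows; note that the positions of the negatively worded items below are placeholders, not the actual PACES key:

```python
# Sum 16 Likert items (1-5) into a 16-80 score, reverse-scoring the
# negatively worded items as described above (response r becomes 6 - r).
NEGATIVE_ITEMS = {1, 4, 6, 9, 10, 12, 14, 15}  # hypothetical item positions

def paces_score(responses):
    if len(responses) != 16 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected 16 responses on a 1-5 scale")
    return sum(6 - r if i in NEGATIVE_ITEMS else r
               for i, r in enumerate(responses))

# Neutral answers (all 3s) land mid-scale
print(paces_score([3] * 16))  # 48
```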
Outcome expectations were assessed with the Outcome Expectations for Exercise-2 Scale (OEE-2) [49,50], including 13 positively or negatively worded items, also rated on a 5-point Likert scale (1: Strongly agree, 5: Strongly disagree). The negative OEE items were reverse-scored, and the numerical ratings for each response were summed and divided by the number of items, so that a highly valued outcome expectation from the patient gave a low total score. Confidence (the readiness to change PA level) was measured via a 100-mm visual analogue scale (VAS) with the question “How confident are you about succeeding with changing PA level?” [51,52]. The VAS line was anchored at each end with words that described the minimum (not at all) and maximum (very) extremes. The mediating factors have been described in detail previously [32].

## 2.5. Health Economic Analysis Methods

In a cost–utility analysis, i.e., a cost-effectiveness analysis with QALY as the outcome measure, at least two alternative treatments are compared in terms of their costs and effects, resulting in an incremental cost-effectiveness ratio (ICER). Here, the costs included for each treatment were actual healthcare resource use, intervention costs, individual expenses related to PA, estimations of production loss due to work absence and individualized time cost for PA. The effect was measured in terms of changes in HRQOL expressed as quality-adjusted life years (QALYs). The cost-effectiveness of the PT group compared to the HCC group is presented in terms of the incremental cost-effectiveness ratio (ICER), which represents the cost of achieving one additional QALY when applying PAP supported by PT compared with continued PAP by HCC.
This is expressed by

$$\text{ICER} = \frac{\text{Cost}_{\text{PT group}} - \text{Cost}_{\text{HCC group}}}{\text{QALY}_{\text{PT group}} - \text{QALY}_{\text{HCC group}}}$$

To include the mediating factors in the analysis, patients were divided into two subgroups for each factor: the half who, at the start of the study, experienced the lowest versus the highest enjoyment, outcome expectations and confidence, respectively, according to the median value in each of the measurements. ICERs were then estimated comparing the costs and effects of the PT intervention with those of the HCC treatment for all subgroups, respectively, following the below example for the patients reporting high enjoyment (≥58). The corresponding ICERs were then estimated for low enjoyment (<58), high confidence (≥55), low confidence (<55), high expectations (<2.08) and low expectations (≥2.08).

$$\text{ICER} = \frac{\text{Cost}_{\text{PT high enjoyment}} - \text{Cost}_{\text{HCC high enjoyment}}}{\text{QALY}_{\text{PT high enjoyment}} - \text{QALY}_{\text{HCC high enjoyment}}}$$

Bootstrapping was performed to acknowledge uncertainty in both costs and effects. This procedure takes the variance in the trial data into account by repeatedly drawing random samples (of the same size as the original) with replacement of costs and effects from the two groups. In this case, 1000 new samples were drawn. Using the net monetary benefit method, QALYs are then replaced by varying willingness-to-pay (WTP) levels for gaining a QALY, in this case ranging from USD 0 to USD 1,000,000. The results of this analysis are presented as cost-effectiveness acceptability curves (CEACs) (Figure 1) showing the probabilities for the PT treatment to be the most cost-effective choice at different WTP thresholds [53]. When the curve is above the 0.5 line (on the vertical axis), this means that PT is more likely than HCC to be the most cost-effective choice for the WTP on the horizontal axis. All randomized participants were kept in their original study groups.
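A minimal sketch of the bootstrap behind the CEACs, assuming per-patient (cost, QALY) pairs and the net-monetary-benefit decision rule described above (data structures and names are ours, not from the study):

```python
import random

def ceac_point(pt, hcc, wtp, n_boot=1000, seed=1):
    """Probability that PT is cost-effective at willingness-to-pay wtp.

    pt and hcc are lists of per-patient (cost, qaly) pairs.
    """
    rng = random.Random(seed)
    mean = lambda xs: sum(xs) / len(xs)
    wins = 0
    for _ in range(n_boot):
        # Resample each group with replacement, same size as the original
        bs_pt = [rng.choice(pt) for _ in pt]
        bs_hcc = [rng.choice(hcc) for _ in hcc]
        d_cost = mean([c for c, _ in bs_pt]) - mean([c for c, _ in bs_hcc])
        d_qaly = mean([q for _, q in bs_pt]) - mean([q for _, q in bs_hcc])
        wins += wtp * d_qaly - d_cost > 0  # incremental net monetary benefit > 0
    return wins / n_boot
```

Sweeping `wtp` from 0 to 1,000,000 and plotting `ceac_point` against it traces out the acceptability curve.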
For missing data needed to estimate costs and effects, stochastic imputation (using a single dataset from multiple imputation) was performed based on the assumption that data were missing at random. All analyses were performed on the imputed dataset. For the subgroup analyses on mediating factors, only complete cases on each mediator, respectively, were used, and patients with costs more than three standard deviations from the mean were excluded.

## 3. Results

At the three-year follow-up, $70\%$ of the patients in the PT group ($n = 69$) and $66\%$ of the patients in the HCC group ($n = 61$) attended. Of the patients attending the follow-up, $77\%$ ($p < 0.001$) of the PT group ($n = 61$) and $66.1\%$ ($p < 0.001$) of the HCC group ($n = 59$) had increased their PA level, and $44.3\%$ vs. $35.6\%$ had achieved the public health recommendation of ≥150 min of moderate-intensity PA per week. There were no significant differences in PA level between the groups at the three-year follow-up ($p = 0.55$). In the PT group, the incremental QALY gain per participant compared to the HCC group over three years was 0.016 (see Table 3). From the societal perspective, the average cost per participant amounted to USD 13,488 in the PT group and USD 13,219 in the HCC group. From the healthcare perspective, the corresponding costs were USD 2685 in the PT group and USD 2150 in the HCC group. According to these costs and effects, the resulting ICER was USD 16,771 per additional QALY gained from the societal perspective and USD 33,450 per additional QALY gained from the healthcare perspective for the PT group compared to the HCC group. Based on bootstrapping, taking the variability in the sample into consideration, cost-effectiveness acceptability curves (CEACs) were produced (Figure 1).
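As a quick arithmetic check on the ratios above (a sketch only; the published ICERs of USD 16,771 and USD 33,450 were presumably computed from unrounded means, so values recomputed from the rounded figures differ slightly):

```python
def icer(cost_a, cost_b, delta_qaly):
    # Incremental cost per additional QALY of strategy A over strategy B
    return (cost_a - cost_b) / delta_qaly

societal = icer(13488, 13219, 0.016)   # ~16,800 USD per QALY (paper: 16,771)
healthcare = icer(2685, 2150, 0.016)   # ~33,400 USD per QALY (paper: 33,450)
```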
In order for PT to be more likely than HCC to be cost-effective for the whole sample, the willingness to pay for a QALY needed to be higher than USD 57,000 when considering the societal perspective and higher than USD 22,000 when considering the healthcare perspective (Figure 1). This can be related to a willingness to pay of USD 57,000 for a QALY (corresponding to SEK 500,000, a threshold value commonly referred to in Sweden). Cost-effectiveness scatterplots for the CEACs are available in Supplemental Figure S1. In a second step, after splitting the sample into high/low on the mediating factors enjoyment, outcome expectations and confidence, CEACs were produced for these subgroups as well.

## 4.1. Main Outcomes

The main aim of this study was to evaluate the cost-effectiveness of a three-year prolonged program of enhanced PAP support delivered by a physiotherapist compared to continued (standard) PAP treatment at the healthcare center for patients who remained insufficiently physically active after a prior six-month period of PAP treatment in a primary healthcare setting. We have tried to shed light on what healthcare should do when a short-term lifestyle intervention is not enough for patients to achieve a desirable PA level. This study does not allow for the analysis of whether the patients “got their chance” and nothing more should be done, but we highlight whether it is most cost-effective to continue the intervention started (HCC group) or to enhance it (PT group). The cost per QALY for the PT strategy compared to the HCC strategy was USD 16,771 with a societal perspective and USD 33,450 with a healthcare perspective. Given a willingness to pay of USD 57,000 for a QALY, the probability of cost-effectiveness for the PT strategy compared to the HCC strategy was 0.5 with a societal perspective and 0.6 with a healthcare perspective.
There are no formally established thresholds, but cost-effectiveness ratios of 50,000–100,000 USD in the USA and 32,000–50,000 USD in the UK have often been accepted [54]. The World Health Organization argues that a threshold should simply be seen as an indication of poor, good or very good value for money [55]. There are no general recommendations for the threshold for the probability of cost-effectiveness for a change in routine care, but there are arguments that it should be close to 0.50 [56]. Consequently, it can be concluded that base-case results indicate that PT is cost-effective compared to HCC, but the uncertainty is large. Therefore, it was not possible to draw a definite conclusion about the most cost-effective PAP strategy in this study, as neither of the strategies was clearly superior to the other. The subgroup analyses showed that when enjoyment was high, the HCC intervention was most cost-effective, and when enjoyment was low, the PT intervention was the preferred choice. For confidence and expectations, the result was ambiguous, with small differences or different results depending on perspective. The number of participants in each subgroup was small and the result should be seen as an indicator of the possible impact of mediators. Nevertheless, the analysis showed that it may be worth considering the individual patient’s mediators for increased PA before agreeing with the patient on choosing the optimal intervention. It is probably advantageous to be able to offer either of the methods for increased individualization of the support, where the patient’s preferences are integrated as a vital part of evidence-based medicine [57]. In this study, the two interventions were quite similar in terms of cost-effectiveness. At the same time, the subgroup analyses indicated that they were not equal in effect and cost-effectiveness for everyone. 
In particular, the subgroup analysis based on enjoyment showed different cost-effectiveness for the respective interventions. Enjoyment has been shown to be the most important mediator for increased physical activity [58,59], and consequently the degree of enjoyment could affect whether extensive support is needed for the individual. In this study, individuals were randomized to either the PT or HCC group. This study suggests that as some individuals seem to benefit more from increased support, cost-effectiveness might be enhanced by screening for enjoyment together with other individual preferences. In clinical use, before the decision about which type of intervention to choose, screening of enjoyment could easily be performed with the PACES short version [60], with four questions instead of 16, as in this study. As the subgroup analysis indicated, the HCC intervention was more cost-effective for patients with higher enjoyment. For patients with lower enjoyment, the PT intervention seemed to be more cost-effective. However, the PT intervention was problematic to implement, with relatively low compliance (an attendance rate of 5.8 out of 11 follow-ups). Possible explanations could be a lack of time, transport problems or a lack of motivation. It is therefore important that only patients who have the need and motivation for this type of support are offered it. More knowledge is needed on whether the areas of expertise of different professional groups (nurses—nursing, physiotherapists—work physiology), in addition to training in PA and PAP, have significance for the patient’s opportunity to increase their PA.

## 4.2. Strengths and Weaknesses

It is not obvious how healthcare professionals should treat patients who have begun the process of changing their PA level but not succeeded in achieving the desired result in the short term.
The patients in this study were motivated to continue the process of behavior change, and so we did not consider it ethically acceptable to randomize some of the patients to discontinued treatment—that is, to a “do nothing” group. This means that the analysis was limited to the way in which continued support should be provided, and not to the question of whether it is cost-effective to continue to provide support or not. The study was carried out in a real-world setting, which makes the results generalizable. It also means that the study could not be carried out completely according to protocol. The inclusion of the patients in the study took much longer than planned, and the interventions were implemented on the basis of each patient’s condition and motivation. This resulted in an average of 5.8 PT counselling sessions instead of the planned 11 sessions, indicating that the patient group as a whole was not motivated to participate in such an extensive intervention. As in all long-lasting interventions, there was an increase in missing data over time. This was handled by multiple imputation (but using one dataset instead of a large number of datasets, which is otherwise usual, since the calculation of the probability of cost-effectiveness requires a single outcome at the individual level) based on the assumption that data were missing at random. This could have led to biased results if those who were least physically active were those who had the most dropouts. However, there is no reason to believe that this would have differed between the two groups. The groups differed in terms of the initial QALY level, which might be a concern since a lower level means greater potential for improvement. However, the difference was due to randomization and not to systematic factors. As there are no specific Swedish tariffs available, preference weights for estimating QALYs are based on UK tariffs. This is standard procedure when using the SF6D in Sweden. 
Many cost-effectiveness analyses of PA interventions have considered the time cost of exercise, but all of them were based on assumptions. As far as we know, this is the first attempt to base exercise costs on empirical data. The study concerns two questions that are rarely answered in the research. Should healthcare continue to support a lifestyle intervention when it is not enough for patients to achieve a desirable PA level in the short term? The question cannot be answered based on our design, but we can see, after the first six-month period, a continued increase in PA, health and HRQOL among the patients at a relatively low cost [32]. This suggests that it can be cost-effective to prolong the intervention. The second question is whether healthcare should start with a small intervention and then increase its magnitude, or invest directly in the most effective (and probably most expensive) intervention. This question also cannot be answered with certainty. However, $60\%$ of the original patient group needed, at least temporarily, only a small intervention [36]. The remaining $40\%$, who were not helped during the first 6-month period, had improved PA, health and HRQOL with the prolonged intervention, suggesting that an individualized step-by-step increase may be the best use of resources in this case.

## 4.3. Strengths and Weaknesses in Relation to Other Research

As far as we know, this is the first study of the cost-effectiveness of prolonged PAP or other prolonged PA interventions. However, earlier studies indicate that it is effective, and thus probably also cost-effective, to support change over a long period of time [19]. It has been shown that behavior change processes and the establishment of new PA habits are individual and, in many cases, take a long time, often several years.
To increase the understanding of the behavior change process and promote behavior change maintenance in PA, more frequent measurements of mediators and outcomes are needed at longer time points [61]. Marcus et al. [19] recommend that follow-up should take place for at least 2 years, preferably 5–10 years.

## 4.4. Future Research

Lifestyle interventions rarely succeed for all patients, and there is very little research into how healthcare providers should act in cases where they fail to support the patient's behavior change. This study and its health economic analysis are among very few attempts to shed light on the matter. There is a great need for more research in this important area of prolonged physical activity support in previously non-complying patients. Our study represents only an initial contribution. We believe that the following questions are important to highlight in continued research, in effectiveness as well as cost-effectiveness analysis:

1. How long and to what extent should prolonged support be provided to non-complying patients?
2. How should the prolonged support be organized?
3. Are there individual factors in addition to enjoyment that can be the basis for individualizing the prolonged support?

## 5. Conclusions

Both the PT and HCC interventions are quite similar from a cost-effectiveness perspective, indicating that both PAP strategies seem to be equally valuable to have in healthcare’s range of treatments. Individual preconditions for being physically active vary, and so does the need, concerning time and magnitude, for professional support.
# A New Bloody Pulp Selection of Myrobalan (Prunus cerasifera L.): Pomological Traits, Chemical Composition, and Nutraceutical Properties

## Abstract

A new accession of myrobalan (*Prunus cerasifera* L.) from Sicily (Italy) was studied for the first time for its chemical and nutraceutical properties. A description of the main morphological and pomological traits was created as a characterization tool for consumers. For this purpose, three different extracts of fresh myrobalan fruits were subjected to different analyses, including the evaluation of total phenol (TPC), flavonoid (TFC), and anthocyanin (TAC) contents. The extracts exhibited a TPC in the range 34.52–97.63 mg gallic acid equivalent (GAE)/100 g fresh weight (FW), a TFC of 0.23–0.96 mg quercetin equivalent (QE)/100 g FW, and a TAC of 20.24–55.33 mg cyanidin-3-O-glucoside/100 g FW. LC-HRMS analysis evidenced that the compounds mainly belong to the flavonol, flavan-3-ol, proanthocyanidin, anthocyanin, hydroxycinnamic acid derivative, and organic acid classes. A multitarget approach was used to assess the antioxidant properties by using FRAP, ABTS, DPPH, and β-carotene bleaching tests. Moreover, the myrobalan fruit extracts were tested as inhibitors of the key enzymes related to obesity and metabolic syndrome (α-glucosidase, α-amylase, and lipase). All extracts exhibited an ABTS radical scavenging activity that was higher than that of the positive control BHT (IC50 values in the range 1.19–2.97 μg/mL). Moreover, all extracts showed iron-reducing activity, with a potency similar to that of BHT (53.01–64.90 vs 3.26 μM Fe(II)/g). The PF extract exhibited a promising lipase inhibitory effect (IC50 value of 29.61 μg/mL).

## 1. Introduction

Among fresh fruit species, plums (genus Prunus) play a minor role in terms of production and cultivated area, yet they are traditionally grown in many areas characterized by temperate climates.
It is usually considered a minor stone fruit, together with apricot, mostly because it is compared to peach and nectarine, which account for wider diffusion and growing areas [1]. The species has a complex botanical classification because several species are traced back to the name plum, and its hybridizations, natural and/or induced, are widespread [2]. Domestic or European plums (*Prunus domestica* L.) are generally used for processing into dried plums, while Japanese plums (*Prunus salicina* Lindl.) are almost exclusively used for fresh consumption. The names of the two species highlight the links between origin and geographical distribution, with the former being more closely associated with the old continent and the latter with the Asian continent. However, these are genetically distant species characterized by different ploidy levels. Myrobalan (*Prunus cerasifera* L.) is a diploid species widespread in the Mediterranean. This species is credited with the origin of European plums via hybridization with the tetraploid P. spinosa, and for this reason, myrobalan is considered genetically close to P. domestica, although with different ploidy [1]. It is widely used as a rootstock for both plum and apricot trees, more rarely for peach [3,4], thanks mainly to its ability to produce adventitious roots that facilitate its propagation [5,6]. In many traditional fruit-growing areas, where intensive agriculture has not taken over, there are several edible fruiting myrobalan accessions, often with small fruits (hence referred to as cherry-plum), selected by farmers and passed down because of consumption related to the gastronomic traditions of indigenous peoples [7]. For this reason, many studies have focused on characterizing the accessions that are locally grown and highly valued by consumers, often for the taste qualities of the fruits as well as their early ripening time [8].
This approach is very often based on the analysis of morphological data through the adoption of specific descriptor lists studied and approved on a global scale [9]. The morphological traits of flowers, leaves, fruits, and seeds are studied, as well as wood, crown, and bud characteristics, tree habit, and phenotypic behavior regarding age of maturity and flowering. More recently, it has been proposed to combine these observations with a molecular-scale evaluation model, which, however, becomes effective only when morphological analysis fails to separate different accessions and, in any case, only in the presence of an adequate bank of genetic information [10]. Over the past decade, on an international scale, many research centers have initiated several programs for biodiversity conservation and characterization in accordance with the provisions of the International Treaty on Plant Genetic Resources for Food and Agriculture [11]. This important agreement has given all participating countries a responsibility in the conservation of indigenous genetic resources through the development of national plans whose main objective is to enable a description of biodiversity with comparable and recognized patterns. Many innovative approaches have been developed for preserving plum biodiversity from the risk of erosion and/or extinction [12]. It is well known that myrobalan fruits are usually rich in fibers and antioxidants, making them an important nutritional source [13]; however, cherry plums are not widely diffused due to the low resistance of the fruit in postharvest management. Given that germplasm conservation is a substantial contribution to preserving knowledge about all crops, the identification of new resources and their characterization is an indispensable tool for achieving sustainable development targets by reducing the risk of genetic erosion.
Indigenous genetic diversity is now recognized as a tool for resilience, mitigating the climate crisis through increased adaptation with less consumption of natural resources, especially soil and water. Genetic diversity can conserve those genes that are potentially useful in strengthening resistance to pathogens or adaptability to stresses [14]. On a global scale, this type of work is also of strategic importance with a view to achieving the Agenda 2030 goals [15]. The conservation of biodiversity of agricultural interest and its morphological and functional characterization is central to SDGs 1, 2, 3, 12, and 14, with the general convergence of all targets toward SDG 13, which is related to urgent action on climate change mitigation and adaptation and, in some ways, underlies all the goals [16]. The attention of consumers toward an increasingly healthy diet has led to an increase in the consumption of fruits, with reference to red fruits, not only for their nutritional value but also for their characteristic taste and well-known health properties [17]. In fact, these fruits, in addition to vitamins and minerals, are rich in compounds with several health properties, mainly phenolic acids such as p-coumaric acid, vanillic acid β-glucoside, protocatechuic acid, and caffeic acid, and flavonoids such as catechin, epicatechin, quercetin, and cyanidin-3-O-glucoside [18,19,20]. Metabolic syndrome (MetS) is a complex disorder that is often associated with insulin resistance, high cholesterol and triglyceride levels, and abdominal obesity [21]. The role of oxidative stress in its pathogenesis has been proved [22,23]. Găman et al. [24] evidenced that in the pancreatic β cells of subjects affected by type 2 diabetes, oxidative stress can reduce insulin secretion and, consequently, glucose uptake. Although research has elucidated many of the mechanisms underlying MetS, its treatment remains a challenge, given the complexity of this disease.
For this reason, many research groups are looking for bioactive compounds from food products that can play a preventive role in the onset of this syndrome. Many of these compounds belong to the class of phenols and possess antihypertensive, antihyperglycemic, antihypercholesterolemic, antioxidant, and anti-inflammatory activity; furthermore, they can produce body weight loss or prevent body weight gain [25,26]. Among the preventive approaches potentially used to counteract MetS and obesity, the inhibition of α-glucosidase, α-amylase, and lipase is one of the most applied. In fact, the inhibition of carbohydrate-hydrolyzing enzymes delays carbohydrate digestion with a consequent hypoglycaemic effect, whereas the inhibition of pancreatic lipase reduces the absorption of ingested fats with a consequent hypolipidemic effect [27,28]. Therefore, herein, we report, for the first time, the pomological characteristics, chemical profile, and nutraceutical properties of different extracts obtained from *Prunus cerasifera* cv ‘Alimena’, a new bloody pulp cultivar from Sicily. The anthocyanin (TAC), flavonoid (TFC), and total phenol (TPC) contents were spectrophotometrically measured. The complete phytochemical profile was assessed using LC-ESI/LTQOrbitrap/MS analysis. A multitarget approach was applied to assess antioxidant activity (ABTS, DPPH, β-carotene bleaching, and FRAP assays). The inhibitory activity against key enzymes involved in MetS was also assessed.

## 2.1. Chemicals and Reagents

All chemicals utilized in this study were purchased from VWR International (Milan, Italy) and Sigma-Aldrich Chemical Co., Ltd. (Milan, Italy).

## 2.2. Plant Material

The research was carried out on a new accession of *Prunus cerasifera* L. named ‘Alimena’, identified in an agricultural area in the territory of Alimena, Sicily (Italy), at an altitude of 675 m a.s.l. (37°41′25″ N; 14°05′53″ E).
From the selected natural tree, budsticks were collected to develop a small experimental orchard of 50 trees grafted onto P. cerasifera myrobalan 29C. The orchard was established in 2015, and standard growing techniques have been applied since then. The planting density was 5 m × 5 m, fully irrigated during summer and managed by adopting spontaneous cover crops during the winter–spring season. Winter and summer pruning was performed yearly; fruit thinning was not necessary due to the regular crop density recorded at fruit set.

## 2.3. Morphological Description

Morphological description was carried out by applying the Guidelines for the Characterization of Plant, Livestock and Microbial Genetic Resources approved under the National Agricultural Biodiversity Plan of the Italian Ministry of Agriculture [29]. The Guidelines were drafted based on the international descriptors approved by UPOV [30] and contain references to plant traits considered essential for the characterization of plum tree accessions, with the aim of distinguishing and defining their uniqueness. Application of the guidelines involves the observation of multiple characters related to the tree and the morphological characteristics of vegetative and reproductive organs. Leaf and wood samples were taken from mature trees during the vegetative-reproductive season, and at the same time, all observations of tree habit were recorded. At maturity, a sample of 100 fruits was subjected to morphological analysis (height, width, and thickness of the drupe and stone), as well as to qualitative analysis (skin and pulp color, titratable acidity, and soluble sugar content). For this purpose, fruits were randomly taken from the mass harvested at maturity from 5 plants of the same age. The samples were transported to the laboratory in refrigerated facilities so as not to suffer any damage or spoilage.

## 2.4. Extraction Procedure

A total of 500 g of ripe fruits of P. cerasifera were cleaned, stones removed, and homogenized into a puree (PA, 380 g) using a food processor. To determine the water content, 50 g of puree was freeze-dried, giving 7.90 g of dry extract (water content: $84.2\%$). Three hundred grams of PA were extracted for seven days at r.t. with 600 mL of acetone. The solvent was evaporated to give, after freeze-drying, 35.5 g of dry extract (PB). Then, 2.5 g of PB was diluted with deionized water (≈15 mL) and extracted with butanol (8 × 10 mL). The butanolic extracts were evaporated to give, after freeze-drying, 0.63 g of dry material (PC). The aqueous layer was freeze-dried to give 1.8 g of extract (PD). As an alternative extraction method, 20 g of PA was mixed with an organic solution [≈120 mL, acetone/methanol/water/formic acid (40:40:20:0.1, v/v/v/v)] [31,32], and the mixture was allowed to stand at 4 °C for 24 h. The crude was filtered and washed with 120 mL of the same solution. The supernatants were combined, evaporated under vacuum at 37 °C, and freeze-dried to obtain 2.51 g of a new extract (PF).

## 2.5. Total Phenol (TPC), Flavonoid (TFC), and Anthocyanin Contents (TAC)

Total phenol content (TPC) and total flavonoid content (TFC) were evaluated as previously described by Leporini et al. [33,34]. The differential pH method was used for total anthocyanin content (TAC) quantification [35].

## 2.6. LC-HRMS Analysis

The LC-ESI/HRMS analysis was carried out on a liquid chromatography system consisting of a Thermo Ultimate RS 3000 UHPLC coupled online to a Q-Exactive hybrid quadrupole Orbitrap high-resolution mass spectrometer (UHPLC-Q-Orbitrap) (Thermo Fisher Scientific, Bremen, Germany), fitted with a HESI II (heated electrospray ionization) probe, working in both negative and positive ionization mode.
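The water content reported in the extraction procedure above (50 g of puree yielding 7.90 g of freeze-dried solid) can be reproduced with a one-line calculation:

```python
# Water content = mass lost on freeze-drying / fresh mass
fresh_g, dry_g = 50.0, 7.90
water_pct = (fresh_g - dry_g) / fresh_g * 100
print(round(water_pct, 1))  # 84.2
```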
In order to allow the chromatographic separation, a Luna C-18 column (RP-18, 2.0 × 150 mm, 5 nm; Waters; Milford, MA, USA), set at a temperature of 30 °C, and a linear gradient, obtained by using mobile phase $0.1\%$ formic acid in water, v/v (A), and $0.1\%$ formic acid in acetonitrile, v/v (B), from 5 to $55\%$ of B, in 20 min, at a flow rate of 0.2 mL/min, were used. 5 μL of each extract, dissolved in water/acetonitrile 1:1 v/v (0.25 mg/mL), were injected by the autosampler. To allow negative and positive ions analysis, the HESI source parameters were set as follows: spray voltage at −2.50 kV and 3.30 kV, respectively; sheath gas at 50 arbitrary units (a.u.); auxiliary gas at 10 and 15 a.u., respectively; auxiliary gas heater temperature at 300 °C; capillary temperature at 300 °C; S-lens RF value at 50 a.u. HRMS and HRMS/MS analyses were carried out by experiments of full (mass range: m/z 150–1400) and data-dependent scan (dd-MS2 topN = 5) at resolutions of 70,000 and 17,500, respectively. A normalized collision energy (NCE) of 30 was used. For each sample, three replicates were performed. Data collection and analysis were carried out by using the manufacturer’s software (Xcalibur 2.2). ## 2.7. In Vitro Evaluation of Antioxidant Potential and Relative Antioxidant Capacity Index The antioxidant activity was assessed by using two radical scavenging test: 2,2-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) and 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging activity, β-carotene bleaching test and Ferric Reducing Ability Power (FRAP). All the procedures were previously detailed [36]. The Relative Antioxidant Capacity Index (RACI) was applied to evaluate the sample characterized by the highest activity [33]. ## 2.8. Pancreatic Lipase Inhibitory Activity The evaluation of pancreatic lipase inhibitory activity was assessed as previously described by Loizzo et al. [ 37]. ## 2.9. 
Evaluation of α-Amylase and α-Glucosidase Inhibitory Activity For the α-amylase and α-glucosidase inhibitory activity tests, the previously reported procedure was adopted [36]. ## 2.10. Statistical Analysis Linear regression, assessment of repeatability, and calculation of averages, standard deviations (SD), and Pearson’s correlation coefficient (r) were performed using Microsoft Excel 2010 software (Redmond, WA, USA). The results were expressed as the means of three different experiments ± SD. The half-maximal inhibitory concentration (IC50) was calculated using GraphPad Prism version 4.0 for Windows (GraphPad Software, San Diego, CA, USA). Parametric data were statistically analyzed by one-way analysis of variance (ANOVA) followed by Tukey’s post hoc test in GraphPad Prism version 4.0 for Windows. Differences at * $p \leq 0.05$ were considered statistically significant, and differences at ** $p \leq 0.01$ were considered highly significant. ## 3.1. Morphological Data All data related to the description of morphological traits provided by the applied descriptors are given in Table 1. All data related to the measurements of the fruits, seeds, and leaves, as well as of the fruit quality characteristics, are given in Table 2. The data revealed that the ‘Alimena’ accession has unique morphological characteristics, mainly related to the average fruit size (41.3 g), which is generally larger than that of the other P. cerasifera cultivars known in the Mediterranean basin. The colors of the skin and pulp likewise give these fruits distinctive traits that are unmatched in the varietal panorama of myrobalans, while confirming their low resistance to handling and transport. ## 3.2. Total Phytochemical Content (TPC, TFC, and TAC) The pulp obtained from the P. cerasifera ‘Alimena’ fruits was subjected to different extractions with several solvents (acetone, butanol, water, methanol, etc.) 
to evaluate the impact of the solvent on the phytochemical content. The presence of several compounds with different structures and polarities can drastically affect their solubility [38]. Polar solvents (MeOH, EtOH, and water) are commonly used for the isolation of polyphenols and glycosides from plants; the most common are mixtures of water with ethanol or methanol. Ethanol is a good solvent for polyphenol extraction and is safe for human consumption, whereas acetone and ethyl acetate have been used for the extraction of medium-molecular-weight metabolites, such as terpenoids and flavonoid aglycones [39]. The PF sample was the richest in TPC and TFC, with values of 97.63 mg gallic acid equivalent (GAE)/100 g FW and 0.96 mg quercetin equivalent (QE)/100 g FW, respectively, followed by the PC sample, whereas PD was the richest in TAC (55.33 mg cyanidine-3-O-glucoside/100 g FW) (Table 3). Previously, the TPC content of P. cerasifera ‘Mirabolano’, P. domestica cv ‘President’, and P. salicina cv ‘Shiro’ at different stages of development was analyzed [40]. All plums exhibited the highest TPC at the date of commercial harvesting—at about 100 days for ‘Mirabolano’, 130 days for ‘President’, and more than 110 days for ‘Shiro’. TPC values in a range from 1.34 to 6.11 g/kg FW were found for the red and purple myrobalan plum fruits, respectively [31]. Values from 1.74 to 3.75 g/kg FW were recorded for the Stanley and French Damson fresh plums, respectively [41]. A lower TPC content was found by Gündüz et al. [8], who investigated P. cerasifera selections from Turkey and found values between 136.8 and 583.1 mg GAE/kg FW for ‘Ozark Premier’ and ‘Selection No. 3’, respectively. On the contrary, our data are lower than those found for P. divaricata ‘Demal’ and P. domestica ‘Sugar plum’, with TPC values of 169.6 and 172.4 mg GAE/100 g, respectively [40]. TFC values from 12.1 to 29.1 mg rutin equivalent/100 g were found for ‘Demal’ and P. 
domestica ‘Red plum’, respectively [42]. TPC values ranging from 177 to 365 mg GAE/100 g were found for P. divaricata yellow and black, respectively [43]. Gil et al. [44] analyzed several Californian flesh plums and found that ‘Black Beaut’ was richer in TPC and TCC in comparison to the ‘Angeleno’, ‘Red Beaut’, ‘Wickson’, and ‘Santa Rosa’ cultivars. Moreover, several research papers have demonstrated that the qualitative and quantitative variability in TPC is often related to different genetic factors and developmental stages [45]. Our data on TAC are in line with those reported for the P. cerasus varieties ‘Kántorjánosi’, ‘Újfehértói fürtös’, and ‘Debreceni bötermö’ (TAC values of 21, 56, and 63 mg cyanidine-3-O-glucoside/100 g FW, respectively) but lower than those found for the varieties ‘Csengödi csokros’ and ‘Cigánymeggy’ (TAC values of 295 and 206 mg cyanidine-3-O-glucoside/100 g FW, respectively) [46]. A higher TAC was found for P. domestica ‘Santa Rosa’ and ‘African Rose’, with values of 164.13 and 326.83 mg cyanidine-3-O-glucoside/100 g FW, respectively, and for P. cerasifera from Georgia (109.77 mg cyanidine-3-O-glucoside/100 g FW) [47,48]. ## 3.3. LC-HRMS Analysis The LC-HRMS profiles of the PC, PD, and PF extracts highlighted the occurrence of several metabolites in P. cerasifera, the structures of which could be assigned by comparing the molecular formulae, fragmentation patterns, and retention times with the literature data and metabolite databases, allowing us to putatively identify hydroxycinnamic acid derivatives, flavonols, flavan-3-ols and proanthocyanidins, anthocyanins, organic acids, sugar alcohols (and their derivatives), glycosylated hydroxybenzaldehyde and benzylic alcohol derivatives, glycosyl terpenates, and glycosylated aliphatic alcohol derivatives (Table 4) [37,49,50,51,52,53,54]. 
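The putative-identification step described above, in which a measured accurate mass is matched against candidate molecular formulae within a mass-accuracy tolerance, can be illustrated with a minimal sketch. The candidate table and the 5 ppm tolerance below are illustrative assumptions for Orbitrap-type data, not the authors' actual workflow or compound list.

```python
# Illustrative sketch (not the authors' pipeline): putative assignment of an
# HRMS peak by comparing the measured m/z against theoretical [M-H]- masses.

# Hypothetical reference table: name -> theoretical monoisotopic [M-H]- m/z
CANDIDATES = {
    "chlorogenic acid (caffeoylquinic acid)": 353.0878,
    "quercetin-3-O-glucoside": 463.0882,
    "rutin (quercetin-3-O-rutinoside)": 609.1461,
}

def ppm_error(measured: float, theoretical: float) -> float:
    """Mass accuracy in parts per million."""
    return (measured - theoretical) / theoretical * 1e6

def match_peak(measured_mz: float, tol_ppm: float = 5.0):
    """Return candidate names whose theoretical m/z lies within tol_ppm."""
    return [name for name, mz in CANDIDATES.items()
            if abs(ppm_error(measured_mz, mz)) <= tol_ppm]

print(match_peak(353.0880))
```

In practice the match would be confirmed against the fragmentation pattern and retention time, as the text describes; mass accuracy alone cannot distinguish isomers such as the caffeoylquinic acid series.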
Except for compounds 28 and 67, which were previously described in the Rosaceae family [55,56] but not in the genus Prunus, and compounds 35, 36, 43, and 54, described in families other than Rosaceae [57,58,59], to the best of our knowledge, most of these compounds have already been detected in plants belonging to the genus Prunus [60,61,62,63,64,65,66,67,68,69,70,71,72,73] but not in the species cerasifera. Only compounds 1, 2, 5, 7, 10, 12, 15, 18, 20, 29, 34, 42, 58, and 75 have already been described in this species [31,45,74,75,76,77,78]. Within the most represented group of metabolites, the hydroxycinnamic acid derivatives, some differences could be appreciated among the three P. cerasifera extracts (Table 4). PF and PC showed, in fact, a higher number of metabolites belonging to this class with respect to the PD extract. It is noteworthy that compounds 21, 25, 31, 35, and 36, containing a feruloyl unit in their structure, and compounds 29, 42, and 64, corresponding to methyl ester derivatives of quinic or coumaroyl acid, were not detectable in PD. Moreover, the caffeoyl- and coumaroyl-quinic acid isomers 6–7 and 17–19, along with the caffeoylhexose 8 and the coumaroyl dihexoside isomers 9 and 14, were detectable in PD at a lower intensity level with respect to the other two extracts, as were the mono-acylated forms of the coumaroyldihexoside isomers (22, 27, 32, and 37). Furthermore, in PD, the coumaroyldihexoside isomers with two to four acetyl groups were even less intense (41, 44, 47, 59, 61, 65, 68–69, and 72–74), while the penta-acetylated forms (76–77) were not detectable at all (Table 4). Compounds 41, 44, and 47 were, in turn, more evident in PC than in PF. Regarding the second most representative group of metabolites identified in P. 
cerasifera—flavonols—once again, the PF extract was the most complete of the three, highlighting, besides the quercetin derivatives, the occurrence of isorhamnetin derivatives (50, 52, and 63) that were not evident in the other two extracts. Furthermore, PC showed the occurrence of flavonols at higher intensity levels than PD (Table 4). Flavan-3-ols (20 and 34) and A-type proanthocyanidins (49 and 56) occurred in all tested extracts, even if at a lower intensity in PD, which lacked the B-type proanthocyanidin dimers (11, 16, and 23), which were, in turn, more evident in PF than in PC (Table 4). Glycosylated derivatives of cyanidin and delphinidin (10, 12, 15, 45, and 48) could be observed in PF, PC, and PD, with PC showing the occurrence of the rutinoside derivatives of both anthocyanins (15 and 45) at a lower intensity level. Simple organic acids like 2 and 5 or the nonanedioic acid 70, as well as derivatives of malic acid with a sugar alcohol like sorbitol or galactitol (3) or with a ketohexose like fructose (4), were detectable in all tested extracts, unlike the glycosylated forms of dicrotalic and abscisic acids (54 and 53), with the first being more evident in PF and PC and the second not detectable in PD (Table 4). Analogously, the glycosylated derivatives of benzyl alcohol or hydroxybenzaldehyde (13, 24, 26, and 28) (differing in their sugars) could be highlighted in all the extracts, contrary to the glycosylated terpenates (39 and 46), which were mainly present in PF and PC, as was the glycosylated form of hexanol (43), likely due to their higher lipophilicity (Table 4). ## 3.4. ‘Alimena’ Myrobalan Antioxidant Potential The antioxidant activities of the myrobalan bloody pulp fruit extracts were assessed using different in vitro methods, namely the FRAP, β-carotene bleaching, ABTS, and DPPH assays (Table 5). The PF sample showed the highest radical scavenging activity, with IC50 values of 19.61 and 1.19 μg/mL for the DPPH and ABTS assays, respectively. 
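IC50 values such as those above are obtained from a dose-response curve; the paper used GraphPad Prism for this, but the idea can be sketched with a simple log-linear interpolation between the two concentrations bracketing the 50% response. The dose-response numbers below are hypothetical, chosen only to illustrate the calculation.

```python
import math

# Hypothetical dose-response data (concentration in ug/mL -> % radical
# scavenging); illustrative values, not the paper's raw measurements.
conc = [5.0, 10.0, 20.0, 40.0, 80.0]
inhibition = [18.0, 32.0, 51.0, 70.0, 85.0]

def ic50(concs, resp):
    """Estimate IC50 by linear interpolation of response vs. log10(conc)
    between the two points bracketing 50% inhibition."""
    for i in range(1, len(concs)):
        lo, hi = resp[i - 1], resp[i]
        if lo <= 50.0 <= hi:
            x0, x1 = math.log10(concs[i - 1]), math.log10(concs[i])
            frac = (50.0 - lo) / (hi - lo)
            return 10 ** (x0 + frac * (x1 - x0))
    raise ValueError("50% response not bracketed by the data")

print(round(ic50(conc, inhibition), 2))
```

A full four-parameter logistic fit, as typically performed by Prism, would use all the data points rather than just the bracketing pair, but the interpolated estimate is usually close when the points straddle 50% tightly.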
In general, by comparing the data obtained from the two different radical scavenging tests, it is possible to note that the DPPH· radical was less sensitive than ABTS+ to our samples. In fact, IC50 values in the ranges 19.61–39.02 and 1.19–2.97 μg/mL were obtained for the DPPH and ABTS tests, respectively. The data from the ABTS test are in the same order of potency as ascorbic acid. The ferric-reducing potential of the ‘Alimena’ pulp extracts was evaluated by the FRAP test, which showed that the samples reduced iron with a potency comparable to that of the positive control BHT. Pearson’s correlation coefficient evidenced r values of 0.79, 0.80, and 0.91 for the correlation of TPC with FRAP, ABTS, and DPPH, respectively. A similar trend was also observed for TFC, with r values of 0.73, 0.85, and 0.95 for FRAP, ABTS, and DPPH, respectively. No positive correlation was found between TAC and the antioxidant data. Our data on radical scavenging potential are better than those reported for Indian P. cerasifera, with IC50 values of 10.09 and 45.40 μg/mL for the ABTS and DPPH assays, respectively [79]. On the contrary, our data obtained by the FRAP test were lower than those found for the ‘Sugar plum’ cultivar (563.8 μM Fe (II)/g) [44]. The variability of antioxidant activity during the harvesting of P. domestica cv ‘President’, P. salicina cv ‘Shiro’, and P. cerasifera myrobalan has been investigated. Comparing the data on the ‘Alimena’ myrobalan with those obtained by Moscatello et al. [40], it emerges that at the commercial maturity stage, the DPPH radical scavenging activity of our samples is significantly higher (IC50 values in the range 19.61–39.02 µg/mL vs. 56.22–76.46 µg/mL), while similar results can be observed with the plum ‘President’ (IC50 in the range 28.63–30.35 µg/mL). Similar results against DPPH were also found for the P. domestica ‘African Rose’ and ‘Santa Rosa’ extracts (IC50 values of 13.923 and 18.416 μg/mL, respectively) [47]. 
On the contrary, a higher DPPH activity was observed for domesticated P. domestica (IC50 in the range 5.2–6.6 µg/mL for black and red fruit, respectively) [43]. Recently, Popović et al. [75] explored different Prunus varieties from Serbia and found high variability in the ability of the hydroalcoholic extracts to counteract the DPPH· radical (IC50 in the range 0.83–29.12 mg/mL for steppe cherry and red cherry plum, respectively). However, all of these extracts were less active than ours. FRAP values ranging from 11.20 to 44.83 mmol TE/g fresh weight (FW) were found for yellow and purple Chinese myrobalan plum extracts, respectively [28], whereas FRAP values between 0.123 and 0.835 mmol TE/kg FW were recorded for ‘Selection 33C 02’ and ‘Selection 31C 18’, respectively [8]. The integration of the antioxidant data through the RACI identified the PF sample as the most active, with the lowest RACI value (0.71) (Figure S1). ## 3.5. Inhibition of Target Enzymes Useful for the Prevention and Treatment of Hyperglycaemia and Obesity In our continuous search for foods able to prevent MetS, we investigated the ability of the bloody pulp of myrobalan, a new variety of Sicilian Prunus, to counteract the enzymes linked with hyperglycaemia and hyperlipidaemia. All investigated extracts exerted enzyme inhibitory activity in a concentration-dependent manner (Table 6). In agreement with Nowicka et al. [80], the extract richest in flavonols (PF) exerted the highest α-amylase inhibitory activity (IC50 value of 34.48 μg/mL). Moreover, the PF sample was the most proficient in inhibiting pancreatic lipase, followed by the PD sample, with IC50 values of 29.61 and 38.16 μg/mL, respectively. Against α-glucosidase, values from 49.76 to 78.87 μg/mL were found for the PC and PD samples, respectively. 
The correlation analysis evidenced a strong positive correlation between TPC and α-glucosidase inhibitory activity ($r = 0.96$) and between TAC and the lipase inhibitory property ($r = 0.80$). A weak correlation was found between TFC and α-amylase inhibition ($r = 0.56$). Our data on carbohydrate-hydrolyzing enzymes are in line with those reported by Kołodziejczyk-Czepas [81] for P. spinosa, who found IC50 values in a range from 15.43 to 90.95 μg/mL for the hydroalcoholic extract and butanol fraction, respectively, against α-glucosidase, and from 33.47 to 110.12 μg/mL for the ethyl acetate and butanol fractions, respectively, against α-amylase. A lower α-amylase inhibitory effect was found by testing the P. cerasus extracts, with IC50 values in a range from 330 to 892 μg/mL for the ‘Cigánymeggy’ and ‘Kántorjánosi’ varieties, respectively [46], and by Popović et al. [75], who investigated different Prunus species (IC50 values in the range 4.61–136.23 mg/mL for P. fruticosa and P. pissardi ‘Carriére’, respectively). In the same work, the authors tested the Prunus extracts against α-glucosidase and found low inhibitory activity (IC50 values in the range 0.41–136.23 mg/mL). Podsędek et al. [82] investigated the effect of a water extract obtained from P. persica fruits from Poland and found a lower α-glucosidase inhibitory activity in comparison to our data (IC50 value of 264.44 mg/mL) despite a higher TPC content. Recently, Ullah et al. [83] demonstrated that P. domestica subsp. syriaca (‘Mirabelle’) was able to inhibit the key enzymes involved in MetS. In particular, the hydroethanolic extract inhibited α-amylase, α-glucosidase, and pancreatic lipase, with IC50 values of 7.01, 6.4, and 6.0 mg/mL, respectively. All these values are higher than those found in this study. 
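The Pearson coefficients used throughout these correlation analyses can be computed directly from paired per-extract values. In the sketch below, the extract values are hypothetical stand-ins (only the PF TPC of 97.63 mg GAE/100 g FW comes from the text), so the resulting r is illustrative rather than one of the reported coefficients.

```python
import math

def pearson_r(x, y):
    """Pearson's product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-extract values (PF, PC, PD): TPC (mg GAE/100 g FW) vs.
# alpha-glucosidase inhibitory potency expressed as 1/IC50 (mL/ug).
tpc = [97.63, 71.3, 55.0]
potency = [0.029, 0.020, 0.013]
print(round(pearson_r(tpc, potency), 3))
```

Note that with only three extracts the coefficient is very sensitive to each point, which is why such correlations are usually interpreted as trends rather than tested formally.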
A careful analysis of the literature revealed that the inhibition of α-glucosidase may be related to the content of the hydroxycinnamic acid derivatives, which are particularly abundant in PF and PC and exert a more potent inhibitory activity against this enzyme [84]. ## 4. Conclusions The preservation of agro-biodiversity for the purpose of consumption is at the heart of the targets of SDG No. 12 (Responsible Production and Consumption), demonstrating that the spread of a model of sustainability depends on choosing products with very high environmental adaptability. For this reason, the identification of new accessions, their description, and in-depth knowledge of their quality characteristics represent a virtuous model of knowledge development for consumption. The growing awareness among consumers of the possibility of preventing and/or treating pathologies through certain defined foods represents the driving force of the global market for these types of products. In this context, we have analyzed the chemical profile and bioactivity of three extracts of a new bloody pulp selection of myrobalan (P. cerasifera). The PF extract showed the most promising bioactivity, consistent with its highest phytochemical content. Further in vivo studies are necessary to better understand the biological properties and potential applications for the development of functional foods or nutraceutical products that are useful to consumers with these types of health problems.
# Evaluating the Effectiveness of Letter and Telephone Reminders in Promoting the Use of Specific Health Guidance in an At-Risk Population for Metabolic Syndrome in Japan: A Randomized Controlled Trial ## Abstract Japan has introduced a nationwide lifestyle intervention program (specific health guidance) for people aged 40–74 years. Medical insurers use reminder systems to improve its utilization rate. This study examined the effectiveness of two reminder methods (mailed letters and telephone calls) in a randomized controlled trial. Subscribers to National Health Insurance in Yokohama City, Kanagawa Prefecture, who were eligible for specific health guidance in 2021, were recruited. A total of 1377 people who met the criteria of having or being at risk of developing metabolic syndrome (male: $77.9\%$, mean age: 63.1 ± 10.0 years) were randomly assigned to one of three groups: a “no reminder” group, a “letter reminder” group, or a “telephone reminder” group. The utilization rates of specific health guidance were not significantly different between the three groups ($10.5\%$, $15.3\%$, and $13.7\%$, respectively). However, in the telephone reminder group, a subgroup analysis showed that the utilization rate was significantly higher among participants who received the reminder than among those who did not answer the calls. Although the effectiveness of the telephone reminder might be underestimated, this study suggests that neither method impacted the utilization rate of specific health guidance among the population at risk of metabolic syndrome. ## 1. Introduction Cardiometabolic diseases (CMDs), such as cardiovascular disease, stroke, diabetes, and chronic kidney disease, are the leading causes of mortality worldwide [1]. The main cause of CMDs is a cluster of metabolic derangements known as metabolic syndrome. The underlying factors for the incidence of metabolic syndrome include obesity, physical inactivity, and older age [2]. 
Therefore, with the increasing rates of obesity [3] and adoption of sedentary lifestyles [4], in combination with the aging of the global population, there is an urgent need to screen for CMD risks and to derive novel prevention strategies. In this context, several countries, including Japan, are striving to establish screening programs and lifestyle intervention strategies in order to promote the primary prevention of CMDs. In 2008, Japan introduced a nationwide screening program (i.e., health checkups) to identify individuals with obesity and high cardiovascular risk (i.e., metabolic syndrome). In addition, the country established specific health guidance (i.e., a lifestyle intervention program) to reduce cardiovascular risk factors [5]. These services are provided to all adults aged 40–74 years every year and are delegated to medical insurers by the Act on Securing Medical Care for the Elderly. Medical insurers have accordingly taken various measures to improve the utilization rates of the programs by the targeted population. According to recent statistics collected in Japan, approximately 30 million people underwent screening and 1.2 million people utilized specific health guidance in 2019 [6]. Meta-analyses have revealed that lifestyle interventions can reduce cardiometabolic risks [7,8,9,10]. In contrast, a large-scale community-based study concluded that individually tailored lifestyle interventions had no effect on ischemic heart disease, stroke, or mortality at the population level after 10 years [11]. In addition, Japanese studies using a quasi-experimental design reported limited effects of lifestyle interventions on cardiometabolic risk factors [12]. Thus, evidence for the effects of health guidance remains controversial. Nevertheless, since specific health guidance is already a national measure, considering better methods to improve the utilization rate is important for both current and future systems. 
Many medical insurers in Japan use reminders to improve the utilization rates of specific health guidance. Systematic reviews have reported the effectiveness of such reminder systems for cancer screening [13]. In the case of general health checkups, a study in the United Kingdom showed that the utilization rate was higher when short message services were used as reminders compared to when the usual letter reminders were used [14]. It has also been reported that telephone reminders are more effective in increasing participation in health checkups than letter reminders [15]. However, the effectiveness of reminders seems to differ depending on the population’s demographic characteristics, such as race/ethnicity [16]. Previous findings mainly originate from studies conducted in Western countries. Therefore, it is necessary to confirm whether these findings are applicable to other populations, such as those in Japan. The purpose of this study was to examine the effectiveness of reminders in promoting the utilization of specific health guidance using a randomized controlled trial design. In this study, we used two reminder methods (letters and telephone calls). In addition, this study focused on people who are considered at high risk of metabolic syndrome, given the requirement for reminders among this population. ## 2.1. Sample and Procedures The target population was National Health Insurance subscribers in Yokohama City, Kanagawa Prefecture, Japan (approximately 510 thousand subscribers as of April 2021). Yokohama is the capital of Kanagawa Prefecture and is located 30 km southwest of central Tokyo. As of April 2021, Yokohama City had a population of approximately 3.78 million. At the time of this study, National Health Insurance covered approximately $20\%$ of the total city population. 
Among the National Health Insurance subscribers in Yokohama, 460,928 were eligible for health checkups in the fiscal year (FY) of 2021 (i.e., between April 2021 and March 2022), and 113,945 received health checkups. Of these, 13,638 were eligible for specific health guidance based on national criteria. Among them, 10,763 people who were deemed to require immediate medical attention (based on the national criteria using the results of health checkups, renal function tests, blood pressure, complete blood counts, lipid panels, blood sugar levels, and liver function tests) were excluded. Consequently, we included 1377 people (355 met the criteria for metabolic syndrome and 1022 were considered to be at risk of metabolic syndrome). In FY2021, among those who underwent health checkups, $14.8\%$ and $56.9\%$ were judged as “applicable” and “at risk” of metabolic syndrome, respectively. The implementation rate of specific health guidance in Yokohama City was $9.3\%$ in FY2020. This study adopted random assignment to enhance the internal validity of the findings. The participants were randomly assigned to three groups: the “no reminder” group ($n = 458$), the “letter reminder” group ($n = 459$), or the “telephone reminder” group ($n = 460$). Random assignment was conducted by the staff of Yokohama City. The staff assigned a unique number to every participant and allocated them randomly to one of the three groups using a random number generator. This process was performed each month. The data analysts, but not the participants, were blinded to the information on the group assignment. A flow diagram of the sampling and allocation processes is shown in Figure 1. ## 2.2. Intervention We adopted letter and telephone reminder interventions in this study. The interventions were administered by the staff of Yokohama City. The information provided to the participants via either letter or telephone call was not personalized. ## 2.2.1. 
Letter Reminder A reminder was mailed to the participants’ home addresses. The main components of the letter were an “explanation of the specific health guidance (including information that the specific health guidance was free of charge)”, “the expiration date of the specific health guidance”, “information on the medical centers/hospitals/clinics where the specific health guidance is provided”, and “telephone number for inquiries”. The expiration date was determined according to the month in which the participants underwent a health checkup. The coupon for specific health guidance was valid for two months from the time of dispatch. ## 2.2.2. Telephone Reminder The public health nurse called the participants on weekdays using the phone numbers that the participants had provided as their contact information when they were enrolled in the National Health Insurance program. The information provided to the participants was compiled into a manual. The main contents were “a brief explanation of the results of the health checkups”, “the explanation of the specific health guidance (including information that the specific health guidance was free of charge)”, “the expiration date of the specific health guidance”, and “information on the medical centers/hospitals/clinics where the specific health guidance is provided and the way to make an appointment”. In cases of disconnection, the public health nurse re-called the participant on different weekdays (up to three times). If a family member answered the phone, the public health nurse informed them that she would call again on a different day and asked them to encourage the participant to receive specific health guidance. Of the 460 individuals assigned to the telephone reminder group, the public health nurse was able to directly reach 274 participants ($59.6\%$) and to leave a message with the family members of 34 participants ($7.4\%$). 
The public health nurse could not reach the remaining 152 participants, and they did not receive the telephone reminder. ## 2.3.1. Outcomes The outcome variable was whether or not the participants utilized specific health guidance in FY2021. Information on the participants’ use of specific health guidance was obtained from the Data Management System of Yokohama City. ## 2.3.2. Participants’ Characteristics We used the participants’ demographics (sex and age) and the results of the health checkups obtained via the Data Management System. The results of the health checkups included the abdominal circumference, body mass index, diastolic blood pressure, systolic blood pressure, HbA1c, fasting blood glucose, triglycerides, high-density lipoprotein cholesterol, history of diseases (cerebrovascular diseases, cardiovascular diseases, chronic kidney failure, and dialysis therapy), smoking habits (“Do you currently smoke habitually?”; yes or no), exercise habits (“Do you exercise lightly for at least 30 min two days a week for at least one year?”; yes or no), and frequency of drinking (“How often do you drink alcohol?”; every day, sometimes, rarely, or never). ## 2.4. Statistical Analysis First, the participants’ characteristics were compared between the three groups using the chi-square test, Fisher’s exact test, and the Kruskal–Wallis test. For continuous variables, which the Shapiro–Wilk test confirmed were not normally distributed, the non-parametric Kruskal–Wallis test was used. Second, the outcome variable (i.e., the utilization of specific health guidance) was compared between the three groups using the chi-square test. For multiple comparisons, the Bonferroni correction was adopted, with a significance level (α) of $1.7\%$ (i.e., $p < 0.017$ (= $\frac{0.05}{3}$)). The analysis was performed using IBM SPSS Statistics 29 (IBM Corp., Armonk, NY, USA). 
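The omnibus chi-square comparison described here can be sketched in a few lines of Python. The per-group used/not-used counts below are not published as raw data; they are inferred from the utilization rates and group sizes reported in the abstract (10.5% of 458, 15.3% of 459, 13.7% of 460), and the closed-form p-value exp(-χ²/2) is valid only because a 3 × 2 table has 2 degrees of freedom.

```python
import math

# Used / not-used counts per group, inferred from the reported rates and
# group sizes; approximate reconstructions, not the study's raw data.
groups = {
    "no reminder": (48, 410),
    "letter reminder": (70, 389),
    "telephone reminder": (63, 397),
}

def pearson_chi2(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    rows = [list(r) for r in table]
    row_tot = [sum(r) for r in rows]
    col_tot = [sum(c) for c in zip(*rows)]
    n = sum(row_tot)
    return sum((obs - row_tot[i] * col_tot[j] / n) ** 2
               / (row_tot[i] * col_tot[j] / n)
               for i, r in enumerate(rows) for j, obs in enumerate(r))

chi2 = pearson_chi2(groups.values())
p = math.exp(-chi2 / 2)      # chi-square survival function; valid for df = 2 only
alpha_pairwise = 0.05 / 3    # Bonferroni-corrected alpha (~0.017) for follow-ups
print(round(chi2, 3), round(p, 3))
```

With these reconstructed counts the sketch reproduces the omnibus statistic and p-value reported in the Results (χ² ≈ 4.753, p ≈ 0.093), and since p exceeds even the uncorrected α of 0.05, no Bonferroni-corrected pairwise comparison can be significant.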
Previous studies have suggested that reminders of health checkups increase the uptake rate by a factor of approximately 1.5–1.7 [14]. The utilization rate of specific health guidance in Yokohama City in FY2020 was $9.3\%$ overall; however, it was $13.3\%$ for the subpopulation of people who met the criteria for metabolic syndrome or were at risk of metabolic syndrome. Assuming a significance level (α) of 0.05 (actually calculated as 0.017, considering multiple comparisons) and a power (1 − β) of 0.80, the total number of necessary samples was projected to be 1380 cases (460 in each group). ## 3. Results Table 1 shows the characteristics of the participants. Overall, $77.9\%$ were male, and the average age was 63.1 ± 10.0 years. No differences were observed between the three groups in any of the variables. Table 2 presents the utilization rates of specific health guidance among the three groups. The utilization rates were $10.5\%$ in the no-reminder group, $15.3\%$ in the letter reminder group, and $13.7\%$ in the telephone reminder group, with no significant differences between the three groups (χ2 = 4.753, $p = 0.093$). Moreover, no differences were found between any two groups in the multiple comparisons. Although not shown in the table, within the telephone reminder group we compared the utilization rates of the participants whose calls were answered either by them directly or by their family members ($n = 308$, $67.0\%$) and those who were not reachable by telephone ($n = 152$, $33.0\%$). The utilization rates were $16.9\%$ (52 of 308) and $7.2\%$ (11 of 152), respectively (χ2 = 8.012, $p = 0.004$). This difference remained significant even after adjusting for sex, age, body mass index, and history of disease. ## 4. Discussion In this study, we examined the effectiveness of two reminder methods (i.e., letters and telephone calls) on the rate of utilization of specific health guidance using a randomized controlled trial design. 
Most medical insurers in Japan use a call–recall methodology to improve the implementation rate of specific health guidance. However, its effectiveness has not yet been sufficiently verified. This study focused on widely used reminder methods that can contribute to the establishment of evidence-based health activities. The analysis did not demonstrate an improvement in the utilization rate with either letter or telephone reminders compared to no reminder. This result differs from those of previous studies regarding general health checkups and cancer screening [13,14,15,17], which confirmed the effectiveness of reminders. One possible reason for this inconsistency may be that individuals refrained from using specific health guidance due to the recent coronavirus disease 2019 (COVID-19) pandemic. The COVID-19 pandemic has affected many aspects of people’s behaviors in daily life. In fact, the nationwide implementation rate of specific health guidance had been increasing every year until FY2019 (i.e., before the COVID-19 outbreak); however, in FY2020, during the outbreak, it decreased (overall: from $29.3\%$ to $27.9\%$; male: from $27.5\%$ to $26.4\%$; female: from $32.9\%$ to $30.9\%$) [6]. The spread of COVID-19 augmented people’s fear of going out and visiting places where people gather, and thus the utilization of specific health guidance might have been impacted. Another possibility is the limited population targeted in this study, i.e., people who have metabolic syndrome or are at risk of developing metabolic syndrome. Although we focused on this population because of their higher need for lifestyle interventions, they might have special circumstances that hinder their use of specific health guidance, which could lead to an underestimation of the effectiveness of reminders. We considered these two possibilities in our analysis. 
Earlier studies regarding the use of a letter invitation/reminder to attend health checkups reported a lack of impact of letters [16,18,19]; however, the letter is the most common method used to invite/remind individuals about health checkup participation. This study also revealed that the effect of the letter reminder might be weak. The letter reminder used in this study was generic. Since some studies demonstrated that tailoring messages in the letter to the individual’s level of risk could increase the participation rate in cancer screening programs [20,21], personalization of the letter may lead to a higher level of utilization of health guidance. Previous studies revealed that telephone reminders are more effective than letter reminders [15,17]. A qualitative study conducted in the United Kingdom reported that participants could directly make an appointment for consultation or to obtain health guidance via telephone reminder, which could contribute to increased utilization [22]. However, in Yokohama City, owing to the system of specific health guidance, it was not possible to make an appointment for specific health guidance over the phone directly; the participants had to make an appointment later by themselves. The inefficiency of this process may have reduced the effectiveness of telephone reminders. In addition, the telephone reminder group included both participants who could be reached by a public health nurse and those who could not; thus, the effectiveness of the telephone reminder could have been underestimated. However, as specific health guidance is provided for those aged 40–74 years, including the working-age population (e.g., those in their 40s and 50s), from a practical perspective, it is difficult to access all of the target population when the telephone reminder is performed on weekdays. To increase the effectiveness of telephone reminders, a more flexible system, such as calling in the evening/nighttime or on weekends, should be implemented. 
Although this study showed no difference in the utilization rate of specific health guidance between the three groups, this does not necessarily mean that reminders are ineffective. This study focused on individuals who met the criteria for metabolic syndrome or were considered at risk of developing metabolic syndrome. Therefore, the findings suggest that reminders directed towards this population may warrant lower priority. Based on this study, future investigations could be conducted to verify which populations would benefit most from the reminder system. This study has several limitations. First, as previously mentioned, the current study targeted only those with metabolic syndrome or subjects at risk of developing this condition. The effects on other populations need to be investigated in future studies to enhance the external validity. Second, this study was conducted in Yokohama. The possibility that the results may differ between regions with different medical resources and resident characteristics cannot be denied, and the generalizability of the findings must be carefully considered. Third, this study was performed in FY2021, the second year of the COVID-19 pandemic. People’s attitudes toward the utilization of specific health guidance could have been influenced by the outbreak. Therefore, the present findings might not necessarily be applicable to the “post-COVID-19 era”. Fourth, many other factors prevent people from using specific health guidance (e.g., the inconvenience of making an appointment for specific health guidance and the inaccessibility of the implementation site). Therefore, it might not be sufficient to improve the utilization rate by implementing reminders alone. Fifth, we were not able to investigate how many participants in the letter reminder group actually read the letter. The effects of the letter reminder might have differed between those who read the letter and those who did not. 
This means that we might have underestimated the effectiveness of the letter. Finally, this study investigated the effectiveness of letter and telephone reminders. However, there are some other reminder options (e.g., short message service (SMS) and e-mail), and their effects should be examined in the future. ## 5. Conclusions We examined the effectiveness of two types of reminder methods (i.e., letters and telephone calls) in regard to the utilization of specific health guidance using a randomized controlled trial design for individuals with metabolic syndrome or those who were at risk of developing it. The results suggest that reminding the population at risk of metabolic syndrome may warrant low priority. Nonetheless, this study possibly underestimated the effectiveness of reminders. Reminders using either letters or telephone calls are labor- and cost-intensive to some degree. Thus, more effective and efficient methods should be explored for the implementation of reminders. Medical insurers worldwide utilize reminder methods to increase the implementation rate of health checkups and health guidance. Previous studies regarding the effectiveness of reminders are mainly derived from Western countries such as the United Kingdom. However, as it has been implied that the population’s demographic characteristics may affect the effectiveness of reminders [16], studies should be conducted to clarify the effectiveness of the reminder methods in each context. In addition, there are other methods, such as SMS and e-mail, beyond the letter and telephone reminder methods investigated in this study, and more effective methods will probably emerge with the advancement of technology. Such methods also need to be verified using a robust design.
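The three-arm comparison described above (no reminder vs. letter vs. telephone) is the kind of analysis typically carried out with a chi-square test of independence on utilization counts. A minimal pure-Python sketch, using hypothetical counts rather than the study’s actual data:

```python
# Chi-square test of independence for utilization across three reminder
# arms. The counts below are hypothetical placeholders for illustration,
# not the study's data.

def chi_square_independence(table):
    """Return the chi-square statistic and degrees of freedom for a
    contingency table given as a list of rows."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            chi2 += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, dof

# rows: no reminder, letter, telephone; columns: used guidance, did not
table = [[12, 188], [15, 185], [14, 186]]
chi2, dof = chi_square_independence(table)
# With dof = 2, the 5% critical value is 5.99; a statistic below it means
# no detectable difference between arms, as reported in this study.
print(chi2, dof)
```

With similar utilization proportions across arms, the statistic stays well below the critical value, mirroring the null result reported above.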
# A Novel Mix of Polyphenols and Micronutrients Reduces Adipogenesis and Promotes White Adipose Tissue Browning via UCP1 Expression and AMPK Activation ## Abstract Background: Obesity is a pandemic disease characterized by excessive fat accumulation and severe comorbidities. Reduction in fat accumulation represents a mechanism of prevention, and the replacement of white adipose tissue (WAT) with brown adipose tissue (BAT) has been proposed as one promising strategy against obesity. In the present study, we sought to investigate the ability of a natural mixture of polyphenols and micronutrients (A5+) to counteract white adipogenesis by promoting WAT browning. Methods: For this study, we employed a murine 3T3-L1 fibroblast cell line treated with A5+, or DMSO as control, during differentiation into mature adipocytes for 10 days. Cell cycle analysis was performed using propidium iodide staining and cytofluorimetric analysis. Intracellular lipid contents were detected by Oil Red O staining. An Inflammation Array, along with qRT-PCR and Western blot analyses, served to measure the expression of the analyzed markers, such as pro-inflammatory cytokines. Results: A5+ administration significantly reduced lipid accumulation in adipocytes when compared to control cells ($p \leq 0.005$). Similarly, A5+ inhibited cellular proliferation during the mitotic clonal expansion (MCE), the most relevant stage in adipocyte differentiation ($p \leq 0.0001$). We also found that A5+ significantly reduced the release of pro-inflammatory cytokines, such as IL-6 and leptin ($p \leq 0.005$), and promoted fat browning and fatty acid oxidation by increasing the expression levels of genes related to BAT, such as UCP1 ($p \leq 0.05$). This thermogenic process is mediated via AMPK-ATGL pathway activation. Conclusion: Overall, these results demonstrated that the synergistic effect of the compounds contained in A5+ may be able to counteract adipogenesis, and thus obesity, by inducing fat browning. ## 1. Introduction
Obesity is a pandemic health problem [1]. In 2016, the World Health Organization (WHO) estimated that 650 million adults, 340 million adolescents and 39 million children were affected by obesity, and these numbers are growing fast [2]. This condition has been worsened by increased junk food consumption, highly enriched with sugar and fat, that contributes to the development of visceral adiposity, which is strongly associated with cardiovascular diseases (CVD) [3]. Visceral adiposity is primarily composed of white adipose tissue (WAT) and is the main type of adipose tissue serving as energy storage. WAT also acts as an endocrine organ, secreting several pro-inflammatory cytokines, such as tumor necrosis factor (TNF)-α, interleukin (IL)-6, and leptin, among others [4]. In a state of obesity, the significant increase in WAT and in cytokine levels leads to the onset of a pro-inflammatory state typical of this pathological condition and its related disorders (insulin resistance, diabetes mellitus, and CVD). Recently, it has been proposed that WAT transdifferentiation into brown adipose tissue (BAT), a phenomenon known as browning, may be a novel approach to counteract obesity [5]. BAT activation enhances energy expenditure and promotes a negative energy balance, reducing weight gain in animal models [6,7]. BAT uncouples fatty acid oxidation from adenosine triphosphate (ATP) production, dissipating energy as heat [8]. This beneficial process is primarily mediated by AMP-activated protein kinase (AMPK) which, when triggered by specific stimuli, such as cold and/or fasting, induces phosphorylation and activation of adipose triglyceride lipase (ATGL), leading to an increase in lipolysis and fatty acid (FA) release [7,9]. These FA, in turn, bind to the uncoupling protein 1 (UCP1), a protein located in the inner mitochondrial membrane, promoting the dissipation of the electrochemical gradient as heat [9]. 
Based on these known mechanisms, several pharmacological and nutritional approaches have been proposed to counteract obesity and fat accumulation [10]. Among nutritional compounds, polyphenols have shown a significant anti-obesity effect by regulating lipid metabolism [11]. Resveratrol, the most studied among polyphenols, promotes BAT metabolism by increasing the expression of UCP1 in rodents [12]. However, the major limitation in the clinical application of polyphenols, especially resveratrol, is their low bioavailability [13]. To avoid this problem, several resveratrol derivatives with enhanced bioavailability have been proposed and investigated, such as the glycosylated derivative polydatin and the methoxylated derivative pterostilbene [14]. Chronic pterostilbene administration in mice fed with a high-fat diet has already been reported to improve lipid metabolism and to promote the expression of UCP1 and other factors related to BAT [15]. Recently, we demonstrated that a novel mix of polyphenols and micronutrients, called A5+, was able to protect against inflammation by reducing cytokine-mediated processes in different in vitro experimental models [16,17]. Based on these findings, the present study aimed to evaluate the effects of A5+ in counteracting adipogenesis by promoting WAT browning in a model of 3T3-L1 murine fibroblasts. ## 2.1. Cell Culture, Differentiation and Treatments A murine 3T3-L1 fibroblast cell line was provided by Prof. Massimiliano Caprio (San Raffaele Open University) and cultured in Dulbecco’s Modified Eagle’s Medium (DMEM, 4.5 g/L glucose) (Gibco, Thermo Fisher Scientific, Waltham, MA, USA), supplemented with $10\%$ Fetal Calf Serum and $1\%$ penicillin/streptomycin (Gibco, Thermo Fisher Scientific, Waltham, MA, USA) at 37 °C in a humidified, $5\%$ CO2 atmosphere. To induce differentiation, as previously reported [18], cells were seeded at the desired concentration in the culture medium. When they reached confluence, the medium was changed. 
The new differentiation medium was composed of DMEM 4.5 g/L glucose supplemented with $10\%$ Fetal Bovine Serum (FBS, Corning, NY, USA), $1\%$ penicillin/streptomycin, 1 µg/mL insulin, 0.5 mM isobutylmethylxanthine (IBMX), and 1 µM dexamethasone, plus 50 µM A5+ (SirtLIfe srl, Rome) or DMSO (for control cells) (Sigma Aldrich, Saint Louis, MO, USA), for 2 days. On day 2, the differentiation medium was replaced with DMEM (4.5 g/L glucose) containing $10\%$ FBS, 1 µg/mL insulin, and 50 µM A5+, or DMSO (for control cells), until day 10. The medium was changed every 2 days until day 10. A5+ is composed of ellagic acid ($20\%$), polydatin ($98\%$), pterostilbene ($20\%$), and honokiol ($20\%$), mixed with recommended doses of zinc, selenium, and chromium. It is dissolved in DMSO at 1 mg/mL, as reported by Pacifici et al. [17]. ## 2.2. Oil Red O Staining Oil Red O staining was performed to quantify the intracellular lipid content, as previously described [18]. Briefly, 1 × 10^5 cells were seeded in a 6-multiwell plate and differentiated as reported in Section 2.1. Then, the cells were washed and fixed with $4\%$ formalin (Sigma Aldrich, Saint Louis, MO, USA). Subsequently, the cells were incubated with $60\%$ isopropanol (Sigma Aldrich, Saint Louis, MO, USA) and then stained with Oil Red O solution (0.5 g/L, Sigma Aldrich, Saint Louis, MO, USA). The dye retained by the cells was dissolved in pure isopropanol and quantified at 490 nm using a Multiskan FC microplate reader (Thermo Fisher Scientific, Waltham, MA, USA). ## 2.3. Proliferation Assay For cell proliferation, 2 × 10^4 cells were plated in a 24-multiwell plate and differentiated as previously reported. At time 0 and at 48 h, the cells were detached using a $0.05\%$ trypsin solution (Gibco, Thermo Fisher Scientific, Waltham, MA, USA); they were then centrifuged, and the pellet was resuspended in culture medium. 
Then, 10 µL of the cell suspension was added to 10 µL of trypan blue (Sigma Aldrich, Saint Louis, MO, USA) and analyzed with a Countess Automated Cell Counter (Thermo Fisher Scientific, Waltham, MA, USA). ## 2.4. Cell Cycle Analysis Cell cycle analysis was performed using propidium iodide staining, as reported in Pacifici et al. [17]. Briefly, the cells were seeded at 1 × 10^5 in a 6-multiwell plate and differentiated as reported in Section 2.1. Then, both the supernatants and cells were collected in a FACS collection tube and centrifuged at 1600 rpm for 5 min. Subsequently, the supernatant was discarded, and the pellet was fixed with $70\%$ ethanol for 45 min [19]. Finally, the cells were washed with PBS, stained with PI solution, and analyzed using cytofluorimetric analysis. ## 2.5. Inflammatory Array The cytokine profile was analyzed in the supernatants of differentiated cells using the Mouse Inflammation Array C1 (Ray-Biotech, Inc., Norcross, GA, USA), as previously reported [17]. Briefly, the cells were treated as described in Section 2.1; at day 10, the supernatants were collected, centrifuged to remove cell debris, and used for the assay. Membranes with 40 spotted cytokine antibodies were blocked with the supplied blocking buffer and then incubated overnight at +4 °C with the supernatants. The next day, the membranes were washed and incubated overnight at +4 °C with a biotinylated antibody cocktail. The next day, the membranes were washed, and HRP-Streptavidin solution was added overnight at +4 °C. The following day, the membranes were washed, and signals were detected by chemiluminescence. The membrane map is reported in Table 1. ## 2.6. Gene Expression Analysis For gene expression analysis, total RNA was isolated using TRIzol reagent (Invitrogen, Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer’s protocol. 
Then, 2.5 µg of total RNA was reverse transcribed into cDNA using the High-Capacity cDNA Archive Kit (Invitrogen, Thermo Fisher Scientific, Waltham, MA, USA). qRT-PCR was performed using the ABI Prism 7500 instrument (Applied Biosystems, Thermo Fisher Scientific, Waltham, MA, USA). cDNA amplification was assessed using specific primers reported by Marzolla et al. [20] (UCP1, Adbr3, Cidea, DIO2, Cpt1beta, Cpt2, Crat, ACADM, ACADL, Hadha, Aco2, Idh3a, Sdhac, Cs) and PowerUp SYBR green dye (Invitrogen, Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer’s protocol. All samples were normalized using TATA-box binding protein (TBP) as an internal control; the relative quantification was calculated using the comparative ΔΔCT method, and the values were expressed as 2^−ΔΔCT. ## 2.7. Western Blot Analysis The 3T3-L1 cell pellets were lysed at 4 °C in an HNTG lysis buffer ($1\%$ Triton X-100, 50 mM HEPES, $10\%$ glycerol, 150 mM NaCl, $1\%$ sodium deoxycholate) supplemented with Phosphatase Inhibitor Cocktail 2 and 3 (Sigma Aldrich, Milan, Italy) and a protease inhibitor cocktail (Sigma Aldrich, Milan, Italy). A clear supernatant was obtained by centrifugation of lysates at 13,000× g for 15 min at 4 °C. Protein concentration was determined using a BCA protein assay kit (Pierce; Thermo Fisher Scientific, Milan, Italy). Protein samples were subjected to sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) using Mini-PROTEAN precast gels (Bio-Rad; Segrate, Italy) and electroblotted onto nitrocellulose membranes (Bio-Rad, Segrate, Italy). Membranes were blocked for 1 h at room temperature (RT) with $5\%$ non-fat milk in Tris-Buffered Saline with $0.05\%$ Tween 20 (TBS-T). Incubation with primary specific antibodies was performed in the blocking solution ($5\%$ milk or bovine serum albumin in TBS-T) overnight at 4 °C, followed by horseradish peroxidase-conjugated secondary antibodies (in blocking solution) for 1 h at RT. 
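The comparative ΔΔCT quantification used in Section 2.6 reduces to simple arithmetic: each target CT is normalized to the TBP internal control, then to the control (DMSO) group, and expressed as a 2^−ΔΔCT fold change. A minimal sketch, with hypothetical CT values for illustration only:

```python
# Comparative ΔΔCT relative quantification (Livak method), as described
# in Section 2.6. CT values below are hypothetical, for illustration.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Return the 2^-ΔΔCT fold change of a target gene, normalized to a
    reference gene (here TBP) and to the control condition."""
    delta_ct_treated = ct_target_treated - ct_ref_treated  # normalize to TBP
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control   # normalize to control
    return 2 ** (-delta_delta_ct)

# Example: a target amplifying 2 cycles earlier (relative to TBP) in
# treated cells corresponds to a 4-fold increase in expression.
print(fold_change(24.0, 20.0, 26.0, 20.0))  # → 4.0
```

A ΔΔCT of zero gives a fold change of 1.0 (no change), and each additional cycle of difference doubles or halves the estimate.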
We used antibodies against AMPK-α 1:1000 (Cell Signaling, Danvers, MA, USA), phospho-AMPK-α (Thr172) 1:1000 (Cell Signaling, Danvers, MA, USA), ATGL 1:1000 (Cell Signaling, Danvers, MA, USA), phospho-ATGL (Ser406) 1:1000 (Abcam, Cambridge, MA, USA), and UCP1 1:1000 (Abcam, Cambridge, MA, USA). The appropriate secondary horseradish peroxidase-conjugated antibodies from Jackson ImmunoResearch were used in the blocking solution (1:5000). Immunoreactive bands were visualized with Luminata Forte Western Chemiluminescent HRP substrate (Millipore (Merck); Milan, Italy) using an ImageQuant LAS 4000 (GE Healthcare). Equal sample loading was confirmed using GAPDH 1:30,000 (Sigma Aldrich, Milan, Italy), and bands were quantified by densitometry using the ImageQuant TL software from GE Healthcare Life Sciences. ## 2.8. Statistical Analysis All data were analyzed using GraphPad Prism 9 (La Jolla, CA, USA). An unpaired two-tailed Student’s t-test was used for statistical analysis and significance. All data were expressed as mean ± SEM. Values of $p \leq 0.05$ were considered statistically significant. ## 3.1. A5+ Blunts Intracellular Lipid Accumulation In order to test whether A5+ was able to reduce intracellular lipid accumulation, we induced 3T3-L1 differentiation into a mature adipocyte phenotype. Then, we stained the differentiated cells with an Oil Red O solution, which stains triglycerides and other lipids. As reported in Figure 1, A5+ administration significantly reduced lipid accumulation when compared to control cells, as confirmed by the Oil Red O absorbance at 490 nm ($p \leq 0.005$). These results indicated a direct effect of this compound on the mechanisms associated with fat storage. To further validate the reduction in adipogenesis, we also analyzed the mRNA expression of some adipogenic factors (Figure 1, Panels b–d). Accordingly, we observed a significant increase in FABP4 ($p \leq 0.001$) and adiponectin expression ($p \leq 0.05$) in the A5+-treated cells. 
Moreover, PPARγ levels were increased following A5+ administration, in agreement with its ability to promote adipogenesis in both white and brown adipose tissue, and to boost brown-fat characteristics in white adipose tissue [21]. Taken together, these data suggest an involvement of A5+ in reducing white adipocyte maturation. ## 3.2. A5+ Inhibits Cell Proliferation by Arresting the Cell Cycle in G2-M Phase Mitotic clonal expansion (MCE) is one of the most relevant stages in adipocyte differentiation. MCE is the stage at which the cells re-enter the cell cycle and promote the transcription of several genes involved in 3T3-L1 adipocyte differentiation [22]. Based on the importance of MCE, we tested whether A5+ could act at this stage by reducing cell proliferation and, thus, the driving force of differentiation. Cells were plated at 1 × 10^5 cells/well in a 6-multiwell plate, and differentiation was induced as previously reported. Then, at day 2, cell number and cell cycle were assessed. As expected, while physiological proliferation occurred in control cells, A5+ administration significantly reduced cell proliferation ($p \leq 0.005$) (Figure 2, Panel a). We also evaluated the cell cycle to confirm the cell growth arrest mediated by the selected compound. As reported in Figure 2, Panel b, cells treated with A5+ showed a cell cycle arrest in G2-M phase compared to control cells ($p \leq 0.05$). These results were further confirmed by the G2-M cell cycle arrest observed during the follow-up of this process, with a peak at day 10 ($p \leq 0.0001$) (Figure 2, Panel c). ## 3.3. A5+ Administration Blunts Inflammatory Cytokine Release in Adipocytes It is well known that mature adipocytes secrete several pro-inflammatory cytokines, thereby contributing to systemic inflammation and complications in obese subjects [23]. 
In order to evaluate whether this novel compound may impact inflammation, we tested the secretion levels of several cytokines directly involved in adipocyte maturation and lipid accumulation in the medium of differentiated, mature 3T3-L1 adipocytes. As shown in Figure 3, A5+ administration significantly reduced the release of BLC, Eotaxin-1, IL-6, and Leptin ($p \leq 0.005$), the chemokine CXCL9 ($p \leq 0.05$), RANTES ($p \leq 0.001$), and TIMP1 ($p \leq 0.05$) when compared to control cells. These data further highlight the relevant anti-inflammatory effect of polyphenols in general, and of A5+ in particular. These findings are also in agreement with our previous data [17]. ## 3.4. A5+ Promotes Fat Browning Recently, a novel strategy to counteract obesity has been reported: it is based on increasing the activity and/or amount of brown adipose tissue (BAT), which, as opposed to WAT, dissipates energy by generating heat, leading to a negative energy balance and weight loss [6]. Based on our previous results, we tested whether the reduction in lipid content after A5+ treatment could be attributed to fat browning. Therefore, we differentiated the cells and isolated RNA to evaluate the expression levels of important genes related to BAT. As reported in Figure 4, cells treated with A5+ displayed significantly increased levels of UCP1 ($p \leq 0.05$), Adrb3 ($p \leq 0.0001$), and Cidea ($p \leq 0.05$). A positive but non-significant trend was also shown for DIO2. These data demonstrated that this natural compound was able to promote fat browning, suggesting a potential role in blunting fat accumulation and obesity by triggering the switch from WAT to BAT. ## 3.5. A5+ Regulates Lipid Metabolism Fatty acid (FA) oxidation is essential to induce UCP1 expression and, thus, to maintain and develop fat browning [24]. Based on our results showing the up-regulation of browning-related genes, we decided to analyze the expression levels of genes involved in FA oxidation (Figure 5). 
As expected, genes involved in mitochondrial FA uptake, in particular Cpt2, were significantly increased in A5+-treated cells when compared to control (ctr) cells ($p \leq 0.05$) (Figure 5, Panel a). Moreover, following A5+ administration, all the analyzed components linked to FA oxidation increased when compared to ctr cells (ACADM: $p \leq 0.05$; ACADL: $p \leq 0.005$; Hadha: $p \leq 0.05$) (Figure 5, Panel b). The acetyl-CoA derived from FA, metabolized by FA oxidation (FAO), enters the TCA cycle to produce the most relevant cofactors essential for mitochondrial respiration [25]. Consistent with the results shown above, genes involved in the TCA cycle were upregulated after treatment (Aco2 and Idh3a: $p \leq 0.005$; Sdhac and Cs: $p \leq 0.05$) (Figure 5, Panel c). Taken together, these data suggest that A5+ regulates brown fat thermogenesis. ## 3.6. A5+ Regulates Cellular Lipid Metabolism in 3T3-L1 via AMPK-ATGL Pathway The observation that A5+ treatment increases the expression of thermogenesis-related markers prompted us to investigate the molecular mechanisms underlying the browning of 3T3-L1 adipocytes. 3T3-L1 pre-adipocytes were differentiated, in complete medium, in the presence or absence of A5+ for 10 days. A5+ effects on 3T3-L1 cells were assessed using Western blot analysis of UCP1 protein expression in terminally differentiated 3T3-L1 cells (day 10). A significant increase of UCP1 protein expression was observed in A5+-treated 3T3-L1 cells when compared with control cells ($p \leq 0.05$) (Figure 6, Panel b). Given the well-known role of AMP-activated protein kinase (AMPK) as a sensor of the intracellular energy state, regulating FA metabolism and thermogenesis in adipose tissue [26], we investigated whether A5+ was able to activate AMPK. 
We observed that A5+ administration induced a significant increase of AMPK-α phosphorylation at threonine-172 (Thr172) at day 10 of 3T3-L1 cell differentiation, indicating its capacity to induce AMPK activation ($p \leq 0.001$) (Figure 6, Panel b). Adipose triglyceride lipase (ATGL) can be phosphorylated at serine-406 (Ser406) by AMPK to increase its catalytic activity and, in turn, lipolysis in adipocytes [27]. Therefore, we examined ATGL phosphorylation at Ser406 in A5+-treated 3T3-L1 cells and observed that it was significantly increased when compared to control cells ($p \leq 0.05$) (Figure 6, Panel b). ## 4. Discussion In the present study, by using a model of a 3T3-L1 fibroblast cell line differentiated into mature adipocytes, we reported, for the first time, that a mix of polyphenols and micronutrients (A5+) may be useful in preventing obesity and its related complications. A5+ administration reduced the accumulation of intracellular lipids and inhibited adipocyte differentiation during MCE, thereby blunting fat accumulation. Moreover, as reported in our previous studies [16,17], A5+ significantly reduced the release of pro-inflammatory cytokines, including leptin. All these beneficial properties of A5+ were primarily linked to its ability to trigger fat browning, i.e., the switching of white adipose tissue to brown adipose tissue, as demonstrated by an increase in the expression of genes linked to this mechanism and to fatty acid oxidation. At the molecular level, overexpression of UCP1 and activation of AMPK represented the main thermogenic pathways involved. Recently, we showed that A5+ significantly blunted inflammation in an in vitro model of Parkinson’s disease [17]. This relevant effect was explained, at least in part, by the synergistic and integrative effect of its components, which act in different phases of cellular rescue mechanisms against damage and/or cellular stress. 
Similarly, in obesity, where low-grade inflammation plays a pivotal role [28], the components of A5+ may exert a preventive and protective effect. The efficacy of different polyphenols against obesity has already been largely explored and reported [29]. We previously demonstrated that tyrosol, a major polyphenol found in extra virgin olive oil, inhibited adipogenesis by downregulating several adipogenic factors (leptin and aP2) and transcription factors (C/EBPα, PPARγ, SREBP1c, and Glut4) and by modulating the histone deacetylase sirtuin 1 [18]. A study using the same in vitro model as the present research showed that phenolic acids, including ellagic acid, inhibited lipid accumulation throughout the whole process of adipogenic differentiation [30]. However, that study remarked that, despite the similar structures of these compounds, they interact with different targets compared to those reported in the previous study and exert distinct effects on adipogenesis [30]. Moreover, polydatin, pterostilbene, and honokiol were not tested. Polydatin was shown to reduce body weight in high-fat diet (HFD)-fed mice, to downregulate serum levels of triglycerides, low-density lipoprotein (LDL), aspartate aminotransferase (AST), and alanine aminotransferase (ALT), and to upregulate high-density lipoprotein (HDL) [31]. In association with the loss of weight, polydatin also reduced levels of pro-inflammatory factors such as IL-6 [31]. On the other hand, pterostilbene significantly ameliorated free fatty acid (FFA)-induced lipid accumulation in HepG2 cells and activated FA β-oxidation to inhibit FA synthesis in HFD-fed mice via AMPK activation [32]. Likewise, honokiol supplementation promoted the browning of WAT through upregulation of UCP1 and AMPK expression in HFD mice [33]. 
All these findings are fully in line with the results of the present study and with our hypothesis of a concomitant and interactive effect of the A5+ compounds on the mechanisms of adipogenesis. After A5+ treatment, we found a cell cycle arrest in the G2-M phase during adipogenesis, which may be the main cause of the ensuing cascade of effects, including the reduction in cellular cytokine secretion. Among all pro-inflammatory factors, a significant decrease in leptin release was found. This may have an important consequence, since leptin is a primary adipokine linked to the mechanisms leading to obesity and its complications, regulating body mass via negative feedback between adipose tissue and the hypothalamus [28,34]. In turn, the reduction of IL-6 and CXCL9, which increase the concentration of FFA [35], may drive the regulation of mitochondrial FA metabolism. The ultimate protective step induced by A5+ is the promotion of fat browning. This is a complex process in which the gut microbiota also plays an important role [36]. BAT includes several cell types, such as pre-adipocytes, stem progenitor cells, and immune cells, and has an anti-inflammatory action through its ability to dissipate energy in the form of heat, primarily mediated by UCP1 [8]. In obesity, BAT function is negatively affected by inflammatory mediators, such as high levels of cytokines. For this reason, anti-inflammatory supplementation, including natural compounds, has already been proposed to preserve it [5]. Here we found that treatment with A5+ increases the expression of the main genes involved in fat browning and in FA oxidation. These processes control adipose tissue thermogenesis [8]. UCP1 dissipates as heat the energy of the proton gradient generated by the electron transport chain during mitochondrial respiration [37]. The resulting increase in cellular respiration has favorable effects on other cellular pathways, such as AMPK-ATGL, which, in turn, are pivotal to activating the central and peripheral beneficial effects of BAT [9]. 
Here, we demonstrated an increase in both UCP1 expression and AMPK-ATGL pathway activation after A5+ treatment. Interestingly, AMPK has already been shown to be positively modulated by other polyphenols, such as resveratrol [9]. The beneficial effect of the minerals dissolved in A5+ (zinc, selenium, and chromium) against obesity has been widely demonstrated. Recently, the levels of these elements were found to be significantly reduced in the blood serum, hair, and urine of obese adult patients, supporting their predictive role in obesity and the helpful impact of adequate replacement therapy [38]. ## 5. Conclusions In conclusion, in the present article we found that a natural product composed of highly bioavailable polyphenols and minerals may help prevent some cellular processes associated with obesity, primarily by reducing cellular lipid accumulation and by increasing fat browning through enhancement of mitochondrial respiration and fatty acid oxidation (Figure 7). Further studies in this important field are necessary to understand how to counteract this pandemic disease.
# A Comparative Analysis of Treatment-Related Changes in the Diagnostic Biomarker Active Metalloproteinase-8 Levels in Patients with Periodontitis ## Abstract Background: Previous studies have revealed the potential diagnostic utility of aMMP-8, the active form of MMP-8, in periodontal and peri-implant diseases. While non-invasive point-of-care (PoC) chairside aMMP-8 tests have shown promise in this regard, there is a dearth of literature on the evaluation of treatment response using these tests. The present study aimed to investigate treatment-related changes in aMMP-8 levels in individuals with Stage III/IV, Grade C periodontitis compared to a healthy control group, using a quantitative chairside PoC aMMP-8 test, and to determine its correlation with clinical parameters. Methods: The study included 27 adult patients (13 smokers, 14 non-smokers) with Stage III/IV, Grade C periodontitis and 25 healthy adult subjects. Clinical periodontal measurements, real-time PoC aMMP-8, IFMA aMMP-8, and Western immunoblot analyses were performed before and 1 month after anti-infective scaling and root planing periodontal treatment. Time 0 measurements were taken from the healthy control group to test the consistency of the diagnostic test. Results: Both the PoC aMMP-8 and IFMA aMMP-8 tests showed a statistically significant decrease in aMMP-8 levels and an improvement in periodontal clinical parameters following treatment ($p \leq 0.05$). The PoC aMMP-8 test had high diagnostic sensitivity ($85.2\%$) and specificity ($100.0\%$) for periodontitis and was not affected by smoking ($p > 0.05$). Treatment also reduced MMP-8 immunoreactivity and activation, as demonstrated by Western immunoblot analysis. Conclusion: The PoC aMMP-8 test shows promise as a useful tool for the real-time diagnosis and monitoring of periodontal therapy. ## 1. Introduction Periodontitis is a chronic inflammatory disease that affects the tissues supporting the teeth and is extremely prevalent in the community [1,2]. 
The pathogenic evolution of the dysbiotic microbial structure in the dental biofilm is among the most crucial factors in the onset and progression of periodontal disease. This process then leads to continued tissue destruction as a result of a non-physiologic overreaction of the host response [2]. Periodontitis, one of the most common causes of tooth loss, is not only limited to local tissues but has been linked to a variety of systemic diseases, including diabetes, cardiovascular disease, cancer, and Alzheimer’s disease [3,4,5,6,7,8]. As a result, early diagnosis of the inflammatory periodontal disease process is critical in preventing tissue destruction [9,10]. Matrix metalloproteinases (MMPs), a family of genetically distinct but structurally related proteases that can degrade almost all extracellular matrix (ECM) structures, play an important role in the tissue destruction caused by degenerative periodontal diseases [11]. MMPs can also process non-matrix bioactive molecules affecting immune responses [12]. These non-matrix bioactive molecules include, but are not limited to, serpins, pro- and anti-inflammatory cytokines and chemokines, growth factors, complement components, and the insulin receptor; thereby, MMPs can modify immune responses and systemic diseases [10,12]. Currently, 23 MMPs have been found to be expressed in humans. MMP-8, also known as collagenase-2, is a pro-enzyme that is primarily derived from neutrophils [10]. It can be activated by microbial virulence factors, proinflammatory cytokines, and reactive oxygen species. Numerous studies have focused on MMP-8 as a diagnostic biomarker for periodontal diseases, and it has been found in oral fluids, such as mouth rinse, saliva, gingival crevicular fluid (GCF), and peri-implantitis sulcular fluid (PISF) [10]. MMP-8 levels in these fluids have been shown to correlate with the severity of periodontal and peri-implant diseases [10,13,14,15,16,17]. 
MMP-8 is produced and expressed during the neutrophils’ development and maturation in the bone marrow and is stored in subcellular neutrophilic granules in a latent state. When infection-induced inflammatory periodontal and peri-implant diseases appear, the process of selective degranulation and extracellular proMMP-8 release and activation begins [12,18,19]. MMP-8 has been found to be the most common collagenolytic protease in the diseased periodontium and peri-implantium [10,20,21,22,23]. The active form of MMP-8, called active MMP-8 or aMMP-8, is the main mediator of the active tissue destruction process in inflammatory periodontal and peri-implant diseases [10,22]. The aMMP-8 levels in intraoral fluids (e.g., mouth rinse, saliva, GCF, and PISF) have been found to rise in inflammatory periodontal and peri-implant diseases [23,24,25]; aMMP-8 is regarded to be among the key biomarkers that play an important role in the diagnosis of periodontal and peri-implant diseases and has been implemented as a biomarker into the new classification of these diseases [10,12,17,21,26]. Traditional methods for diagnosing periodontal diseases include bleeding on probing, clinical attachment level measurement, probing depth, and radiographic findings [2,27]. Classical periodontal examination methods can be painful for the patient, and they are time-consuming procedures that must be repeated in all follow-up processes after periodontal treatment, which adds to the risk of bacteremia [2]. Furthermore, probing-related evaluations such as bleeding on probing and pocket depth may not yield objective results due to a variety of factors, such as the force applied by the examiner and the characteristics of the periodontal probe. Hence, the classical clinical assessments have been regarded as at least partially error-prone [2,18,27,28]. 
On the other hand, radiographic examination methods can only provide information about the destructive effects of periodontal disease which have occurred in the past [2,29]. When considered as a stand-alone evaluation criterion, bleeding on probing (BOP) values, which are regarded as the gold standard for assessing periodontal disease activity, may thus be ineffective in diagnosing active periodontitis [30]. A number of longitudinal studies have also shown that BOP alone is not a good predictor of periodontal tissue destruction in treated cases [31,32]. Several studies have shown that a chairside PoC aMMP-8 test could be more effective in the diagnosis of subclinical periodontal diseases compared to BOP [33,34,35,36,37]. A few studies in the literature characterize the periodontal treatment-related changes of aMMP-8, giving promising results about periodontal disease activity and evaluating its correlation with other oral biomarkers [23,24,38]. The present study aimed to investigate treatment-related changes in aMMP-8 levels in individuals with periodontitis using a quantitative chairside PoC aMMP-8 test, and to evaluate its on-line, real-time quantitative correlation with the studied clinical periodontal parameters. Consistency characteristics of the diagnostic tests were evaluated. The aMMP-8 levels and molecular forms were also assessed by IFMA and Western immunoblotting analysis, respectively. ## 2.1. Study Population and Design The study design is presented in Figure 1. A total of 27 patients visiting a private clinic “Özel Fulya Ağız ve Diş Sağlığı Kliniği” in Tekirdağ, Turkey for their periodontal problems were recruited in the present study. The study was approved by the Biruni University Ethics Committee (2015-KAEK-71-22-06) and was carried out according to the principles of the Declaration of Helsinki. Oral and written consent was obtained from all recruited subjects. 
The inclusion criteria for the study were: interdental clinical attachment loss: ≥5 mm (at the site of greatest loss), detection of radiographic bone loss extending beyond $33\%$ of the root, tooth loss due to periodontitis: ≤4 teeth (Stage III Periodontitis), ≥5 teeth (Stage IV Periodontitis). Patients with Acquired Immune Deficiency Syndrome (AIDS), uncontrolled diabetes (HbA1c > 7), and other immune-system-related chronic diseases (Crohn’s disease, etc.) were excluded from the study. Pregnant or lactating females and individuals who had received periodontal treatment within the last year were also excluded. A total of 25 systemically and periodontally healthy dental students from the University of Helsinki, Finland served as healthy controls. ## 2.2. Periodontal Examination Procedure Comprehensive periodontal examination was performed at baseline and 1 month following periodontal treatment by a single periodontist (M.K.). Probing depths (PD) were measured at six sites of each tooth with a Williams color-coded Michigan probe. Plaque index was recorded by assigning a score of 0–3 to each surface, and average oral plaque score was calculated for each patient [39]. The percentage of bleeding on probing (BoP) was determined after probing depth measurements. Gingival margin levels (GML) were determined by taking the enamel–cement junction (ECJ) into account during probing depth measurements. The areas where the free gingival margin ended at the apical of the ECJ were recorded as positive values, and the areas where the free gingival margin terminated at the coronal point were recorded as negative values. Clinical attachment levels for each site were determined as the sum of GML and PD. ## 2.3. Periodontal Treatment Procedure Periodontal treatment was carried out by a specialist periodontist (M.K.). Initially, cause-related therapy, including full-mouth scaling and root planing procedures, was performed along with oral hygiene instructions. 
At 2 weeks following the non-surgical phase of the periodontal therapy, periodontal sites associated with irregular bony contours, angular defects, or pockets in which complete access with non-surgical periodontal therapy was not possible, such as grade II–III furcation defects, were treated with open flap debridement. Patients who underwent the surgical phase of treatment were prescribed amoxicillin plus clavulanic acid (1 g/day) and chlorhexidine mouth rinse ($0.12\%$) twice a day for 7 days and recalled thereafter for suture removal. All patients were re-evaluated clinically 1 month following treatment. ## 2.4. Quantitative Chairside PoC aMMP-8 Analyses Levels of aMMP-8 were measured quantitatively using rapid PoC chairside aMMP-8 kits (Periosafe®, Dentognostics GmbH, Solingen, Germany) and a quantitative spectrometer analyzer (Oralyzer®, Dentognostics GmbH, Solingen, Germany) on mouth rinse samples collected before treatment and 1 month following periodontal treatment. To perform a comparative analysis with the periodontitis patient group, analysis of aMMP-8 was also conducted on the healthy control group at T0 (baseline). PoC chairside aMMP-8 analyses were performed prior to clinical measurements, and the manufacturer’s instructions were followed. Patients and controls were asked not to eat for 1 h before analyses. First, the patients and controls were instructed to rinse their mouths with clean water (drinking or distilled water) for 30 s and spit it out. After a waiting period of 1 min, they were told to rinse their mouths for 30 s with 5 mL of distilled water in the aMMP-8 kit (Periosafe®) and spit it back into the container. Then, 3–4 drops were taken from the container with a sterile syringe and poured into the well on the test cassette provided in the aMMP-8 kit. Immediately after that, the cassette was transferred to the digital spectrometer device (Oralyzer®) and quantitative results were obtained after 5 min. 
The remaining liquid in the container was transferred to Eppendorf tubes and stored at −70 °C for further laboratory analysis. ## 2.5. Measurement of the aMMP-8 Levels by Immunofluorometric Assay (IFMA) The aMMP-8 level from mouth rinse samples was determined by a time-resolved immunofluorescence assay (IFMA) as described by Öztürk et al. [40]. Briefly, aMMP-8-specific monoclonal antibodies 8708 and 8706 (Actim Oy, Espoo, Finland) were used in the analysis as a catching antibody and a tracer antibody, respectively. In this protocol, the diluted samples were allowed to incubate for 1 h with the Europium-labelled tracer antibody. The fluorescence was measured using an EnVision 2015 multimode reader (PerkinElmer, Turku, Finland). ## 2.6. Western-Immunoblotting Testing Procedure The molecular forms of MMP-8 were detected from mouth rinse samples by a modified enhanced chemiluminescence (ECL) Western blotting kit according to protocols recommended by the manufacturer (GE Healthcare, Amersham, UK) as described earlier by Rautava et al. [41]. Briefly, the proteins of mouth rinse samples were first separated by electrophoresis and then electro-transferred onto Protran nitrocellulose membranes (Whatman GmbH, Dassel, Germany). The membranes were incubated overnight with monoclonal primary antibodies anti-MMP-8 [42] and then with horseradish peroxidase-linked secondary antibody (GE Healthcare, Buckinghamshire, UK) for 1 h. The membranes were washed 4 times in TBST between each step for 15 min. The proteins were visualized using the ECL system according to protocol. Recombinant human MMP-8 (100 ng, Calbiochem, Darmstadt, Germany) was used as a positive control. ## 2.7. Statistical Analysis All periodontal parameters, including probing depth, bleeding on probing, plaque index, and clinical attachment level, were examined before periodontal treatment and 1 month following anti-infective periodontal treatment. 
Normality of the data was tested before calculating paired-samples t-tests. A paired-samples t-test was used to analyze the statistically significant differences between these two phases. The effect of smoking on aMMP-8 levels was tested with repeated-measures ANOVA. A $p \leq 0.05$ was accepted as statistically significant. Receiver operating characteristic (ROC) analysis and the area under the ROC curve (AUC) were used to examine the diagnostic accuracy of aMMP-8 to classify periodontitis and periodontally healthy subjects. In order to identify optimal cut-offs from the ROC curves, the Youden Index was used for calculating diagnostic sensitivity and specificity (Se and Sp). ## 3.1. Study Population A total of 27 periodontitis patients (Stage III: 4; Stage IV: 23; all 27 Grade C) and 25 healthy control subjects were enrolled in the study. Ages of periodontitis patients ranged between 30 and 70 years. All healthy subjects were younger (age range 23 to 25 years) than the study group ($p \leq 0.01$). Demographic characteristics of periodontitis patients and healthy control subjects are shown in Table 1. ## 3.2. Clinical Periodontal Parameters All periodontitis patients were subjected to non-surgical periodontal therapy, followed by open flap debridement in seven of them. Statistically significant improvements following anti-infective treatment were observed for all periodontal parameters ($p \leq 0.001$) (Table 2). Scatter plot diagrams of the relationship between probing depths, bleeding on probing, clinical attachment level, and plaque indices before periodontal treatment and after anti-infective periodontal treatment are presented in Figure 2. The clinical parameters as well as aMMP-8 levels of the periodontitis patients reduced to levels close to those of healthy subjects following periodontal therapy (Table 2). 
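The cut-off selection via the Youden Index described in the statistical analysis above can be sketched in a few lines. This is a minimal illustration, not the authors' code, and the aMMP-8-like values below are synthetic, chosen only for demonstration:

```python
# Illustrative sketch: choose a diagnostic cut-off by maximizing the
# Youden index J = Se + Sp - 1 over candidate thresholds.

def youden_cutoff(diseased, healthy):
    """Scan candidate thresholds; return (best_cutoff, sensitivity, specificity)."""
    best = (None, 0.0, 0.0, -1.0)  # cutoff, Se, Sp, J
    for t in sorted(set(diseased) | set(healthy)):
        se = sum(x > t for x in diseased) / len(diseased)  # true-positive rate
        sp = sum(x <= t for x in healthy) / len(healthy)   # true-negative rate
        j = se + sp - 1
        if j > best[3]:
            best = (t, se, sp, j)
    return best[:3]

# Synthetic example: diseased samples tend to exceed healthy ones.
diseased = [12, 25, 31, 44, 52, 60, 75, 88]
healthy = [2, 4, 6, 8, 10, 12, 15, 18]
cutoff, se, sp = youden_cutoff(diseased, healthy)
print(cutoff, se, sp)  # → 18 0.875 1.0
```

In practice the ROC curve would be built from the measured aMMP-8 concentrations, and the threshold maximizing J is reported together with its Se and Sp, as in Section 3.3.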
Both non-smoker and smoker subjects showed statistically significant decreases in terms of inflammatory clinical parameters ($p \leq 0.001$). A similar clinical healing pattern was observed in both groups (Figure 3). ## 3.3. aMMP-8 Results A statistically significant decrease in oral rinse aMMP-8 levels following anti-infective periodontal treatment was observed regarding both Oralyzer® and IFMA results and in correlation with bleeding on probing ($p \leq 0.05$) (Table 2 and Figure 4). Both Oralyzer® and IFMA results indicated a similar pattern of decrease in terms of oral rinse aMMP-8 levels, and it was also observed that smoking did not have a significant effect on aMMP-8 PoC testing (Figure 4 and Figure 5) ($p \leq 0.05$). An ROC analysis was used for analyzing the diagnostic ability of aMMP-8 PoC and IFMA tests to discriminate patients with periodontitis (before treatment) from healthy controls (Figure 6). AUC was also calculated and showed excellent discrimination ability between periodontitis and periodontally healthy groups (aMMP-8 POC test = 0.963; $95\%$ CI: 0.904–1.000; $p \leq 0.001$ and aMMP-8 IFMA test = 0.975; $95\%$ CI: 0.941–1.000; $p \leq 0.001$). Optimal cut-offs for aMMP-8 POC and IFMA tests were estimated by Youden’s Index (aMMP-8 POC test: 20.0 ng/mL; sensitivity: 0.852; specificity: 1.000; aMMP-8 IFMA test: 43.20 ng/mL; sensitivity: 0.926; specificity: 0.920). With the cut-off set at 20 ng/mL, pretreatment sensitivity was $85.2\%$ and post-treatment sensitivity was $81.5\%$; $85.2\%$ (23 out of 27) of study subjects were aMMP-8 positives (>20 ng/mL), and $78.3\%$ (18 out of 23) of aMMP-8 positive patients were converted to aMMP-8 negatives (<20 ng/mL) following periodontal therapy. With the cut-off set at 10 ng/mL, pretreatment sensitivity was $100\%$. All (27 out of 27) study subjects were aMMP-8 positives (>10 ng/mL), and $43.4\%$ (10 out of 23) aMMP-8 positive subjects converted to aMMP-8 negatives (<10 ng/mL) following therapy (Table 3). 
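The proportions quoted above for the 20 ng/mL cut-off follow directly from the reported counts; a minimal arithmetic check using only the numbers stated in the paragraph:

```python
# Reproducing the reported proportions for the 20 ng/mL cut-off:
# 23 of 27 patients were aMMP-8 positive before treatment, and 18 of
# those 23 positives converted to negative after therapy.
pre_positive, n_patients = 23, 27
converted = 18

pretreatment_sensitivity = pre_positive / n_patients  # fraction of patients detected
conversion_rate = converted / pre_positive            # positives turned negative

print(f"{pretreatment_sensitivity:.1%}")  # 85.2%
print(f"{conversion_rate:.1%}")           # 78.3%
```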
## 3.4. Western Immunoblotting Analysis Results Representative Western immunoblot analysis and aMMP-8 POC-test outcomes of MMP-8 in the studied mouth rinse samples from orally and systemically healthy and diseased study subjects are shown in Figure 7. MMP-8 was in latent form in the healthy sample (Figure 7A, Lane 2), and in the diseased samples it was converted to active and fragmented forms (Lane 3) as analyzed by monoclonal anti-MMP-8 antibody (Figure 7A). Negative (−, <20 ng/mL) and positive (+, ≥20 ng/mL) aMMP-8 POC-test outcomes are shown in Figure 7B. ## 4. Discussion Periodontal diseases are chronic inflammatory conditions affecting the supporting tissues of the teeth [1]. One of the key enzymes involved in the breakdown of these tissues is matrix metalloproteinase-8 (MMP-8). While MMP-8 is important for normal tissue remodeling and repair, excessive or uncontrolled production of this enzyme can lead to tissue destruction and the progression of periodontal diseases. Recent studies have focused on the use of aMMP-8 as a biomarker for periodontal diseases; aMMP-8 refers to the active form of MMP-8, which is produced by neutrophils and other inflammatory cells in response to bacterial infection. Elevated levels of aMMP-8 have been linked to increased tissue destruction and disease progression in periodontal diseases, making it a valuable diagnostic and prognostic tool for these conditions [12]. Furthermore, aMMP-8 has been shown to be a more specific marker for active periodontal disease than total MMP-8, which can be found in both active and inactive forms [12,21]. The present study aimed to evaluate treatment-related changes of mouth rinse aMMP-8 levels by using PoC aMMP-8 kits and the Oralyzer® reader, a non-invasive method that rapidly and quantitatively produces chairside, on-line, real-time results. 
Both chairside PoC aMMP-8 tests and IFMA aMMP-8 laboratory analysis confirmed that pre-treatment mouth rinse aMMP-8 levels were clearly higher than the mouth rinse levels of patients 1 month following periodontal treatment. Our study provides valuable insights into the potential use of PoC chairside aMMP-8 tests and IFMA aMMP-8 laboratory analysis in the diagnosis and post-treatment follow-up of periodontal diseases. However, there are several limitations that should be considered when interpreting the results. Firstly, the small sample size could limit the generalizability of our findings. Secondly, the short follow-up period of only 1 month limits the assessment of the effectiveness of these techniques over time. The absence of periodontally healthy smokers in our study groups can also be considered a limitation in comparative evaluations. Despite these limitations, these findings support the potential use of both techniques in the diagnosis and post-treatment follow-up of periodontal diseases. This study utilized both the PoC chairside aMMP-8 test and the aMMP-8 IFMA measurements, which use the same monoclonal antibodies (Sorsa T et al., US patent no: US10488415B2). These techniques utilize two monoclonal antibodies, i.e., a primary (catching) antibody and a secondary (detection) antibody [9,17,23,43,44]. Although the techniques correlate with each other, they produced different values, evidencing that each can independently diagnose and differentiate periodontal health and disease. Both techniques can also be applied to monitor the treatment of the disease [12,24,35]. This study thus confirms and further extends the results of several previous studies demonstrating the potential benefits of POC chairside aMMP-8 and IFMA aMMP-8 laboratory analysis in terms of diagnostic distinction between periodontal health and disease [34,36,37,40,45,46,47,48]. 
Furthermore, our present findings are in accordance with numerous studies linking elevated oral aMMP-8, but not total MMP-8, to active and progressive stages of periodontal and peri-implant diseases [20,23,43,49,50,51,52,53,54,55]. It was previously shown that smokers had significantly higher levels of aMMP-8 in their saliva compared to ex-smokers or non-smokers [17,54]. When the pre-periodontal-treatment results were evaluated from a diagnostic point of view, smoking was not found to significantly affect aMMP-8 PoC testing, in agreement with previous studies on aMMP-8 in oral fluids (Mäntylä et al., 2006). The sensitivity of the test was found to be $85.2\%$ when the cut-off value was set at 20 ng/mL. In a recently published study by Öztürk et al. [40], which included Stage III and IV periodontitis patients, the diagnostic sensitivity of the PoC aMMP-8 test was $83.9\%$. In other studies in which periodontitis and peri-implantitis patients were included and the cut-off value was set at 20 ng/mL, the aMMP-8 PoC test’s sensitivity ranged between $76\%$ and $90\%$ [21]. Clinical periodontal parameters at pre-treatment and 1 month following periodontal treatment revealed statistically significant improvement, as predicted and consistent with the literature [56,57]. The quantitative chairside PoC aMMP-8 and IFMA aMMP-8 laboratory results both demonstrated a statistically significant decrease, correlating well with the clinical findings. There are many studies in the literature reporting a decrease in aMMP-8 levels following periodontal treatment [10,12,24,25,48,49,58]. While MMP-8 in its latent form was detected more frequently in the healthy state [53], the release of degranulated aMMP-8, its activated form, increases with periodontal and peri-implant inflammation and disease severity [12,23,54,55]. 
The statistical decrease in aMMP-8 levels post-periodontal treatment suggests that active tissue destruction, along with clinical disease activity, is reduced, confirming the role of MMP-8 in periodontitis pathogenesis [10,12,59]. When analyzing the clinical results, it becomes clear that factors such as deep periodontal pockets, bleeding on probing (BOP), and oral hygiene are strongly interlinked. However, despite treatment, not all patients were able to achieve complete oral health status, as these parameters did not return to normal levels in all cases. Furthermore, it was observed that the post-treatment mouth rinse aMMP-8 levels (in both IFMA and PoC chairside aMMP-8 tests) were higher than health-associated levels. In the study of Umeizudike et al., it was found that at the sixth month post-periodontal treatment, the aMMP-8 levels did not approach those of the healthy control group [48]. Literature data further suggest that individuals with gingivitis may have elevated aMMP-8 IFMA levels and that aMMP-8 release may persist in periodontal sites that respond poorly to treatment [25,35,36]. Periodontally and systemically healthy dental students without any periodontal disease experience and activity all had negative (−, <20 ng/mL) aMMP-8 levels. This finding was also compatible with the literature, which further affirms that Periosafe PoC aMMP-8 test negativity can be regarded as a biomarker of periodontal and peri-implant health [40]. Since this study includes a 1-month follow-up, the clinical and biochemical findings might not have reached the level of complete health due to persistent gingival inflammation and residual periodontal pockets. There are studies in the literature that suggest that the post-treatment re-evaluation period ranges from 2 weeks to 6 months [59,60]. Morrison et al. state that the severity of periodontitis can be significantly reduced during the 1-month post-periodontal-treatment follow-up process. 
However, the oral hygiene process must be fully ensured to determine the ongoing treatment need [60]. When comparing the aMMP-8 cut-offs of 20 ng/mL vs. 10 ng/mL [34,36], we found that, especially after treatment, fewer patients met the periodontal-health (test-negative) criterion with the 10 ng/mL cut-off. Deng et al. [36] used 10 ng/mL as the diagnostic cut-off value, but it should be remembered that this is not recommended by the manufacturers [21,35]. Our present results provide further support for the use of 20 ng/mL as the diagnostic cut-off value for aMMP-8 PoC tests [23,24,34,35]. Laboratory analysis of immunological inflammatory factors is considered to be the gold standard [61,62,63]. The results of PoC chairside aMMP-8 tests were consistent with those of IFMA aMMP-8 analyses, indicating that non-invasive PoC aMMP-8 analysis [12,21,34,45,64] can make a potential contribution to the diagnosis and follow-up of periodontal therapy. However, there is a need for more longitudinal studies on the functionality of PoC chairside aMMP-8 analyses in periodontal treatment and its follow-up. ## 5. Conclusions The observation of alarmingly high mouth rinse aMMP-8 levels in individuals with periodontitis through both point-of-care aMMP-8 and IFMA aMMP-8 analyses, and their significant decrease after anti-infective periodontal treatment, highlights the practical utility of the point-of-care aMMP-8 test for real-time diagnosis and monitoring of periodontal treatment progress. ## 6. Patents TS is the inventor of U.S. patents 1,274,416, 5,652,223, 5,736,341, 5,864,632, 6,143,476 and US 2017/0023571 A1 (issued 6 June 2019), WO 2018/060553 A1 (issued 31 May 2018), 10,488,415 B2, and US 2017/0023671 A1, Japanese Patent 2016-554676 and South Korean Patent No. 10-2016-7025378.
# Groove Pancreatitis—Tumor-like Lesion of the Pancreas ## Body A 45-year-old male smoker with a past history of severe chronic alcoholism presented to our gastroenterology department with a 2-month history of intermittent episodes of upper abdominal pain radiating to the back, as well as nausea, postprandial vomiting, and poor appetite that had persisted for 3 months, followed by a 10 kg weight loss. The patient reported no history of hypertension, previous abdominal surgery or diabetes mellitus. His family and drug history were unremarkable. Physical examination was unremarkable except for bilateral upper quadrant abdominal tenderness and hypoactive bowel sounds, while no abdominal mass was identified. Laboratory results showed that hemogram, amylase, lipase, albumin, renal and liver function tests were within normal limits. A tumor marker test found a slightly increased level of carbohydrate antigen 19-9 (CA 19-9) at 40 U/mL (normal range ≤ 30 U/mL). Carcinoembryonic antigen (CEA) and alpha-fetoprotein (AFP) levels were both normal. US demonstrated mild hepatic steatosis with no cholelithiasis or acute cholecystitis but revealed general thickening of the second part of the duodenum and a voluminous pancreatic head. Esophagogastroduodenoscopy showed a narrowed second part of the duodenum due to an irregular, edematous, “reddish” mass of polypoid appearance arising at the D1–D2 junction with intact overlying mucosa (Figure 1). Additionally, histological examination of the pseudo-polypoid biopsy specimen revealed chronic and active mucosal inflammation and edema in the mesenchyme and was negative for malignancy. A computed tomography (CT) of the abdomen, pre-contrast phase (Figure 2), arterial phase (Figure 3) and portal venous phase (Figure 4), revealed duodenal wall thickening with luminal narrowing. It was noted that the contrast uptake was parenchymal in nature and relatively homogeneous without any accompanying cystic forms. 
The periduodenal adipose tissue and the area of the duodenal–pancreatic groove were infiltrated, with minimal adjacent fluid. Minimal densification of the right anterior pararenal fascia was observed. The cephalic pancreatic area was slightly swollen but with a relatively homogeneous acinar structure. No parenchymal calcifications or cysts were visible. The body and tail of the pancreas were healthy. The Wirsung duct and biliary system were normal as well. The pancreaticoduodenal artery was patent and interposed between the head of the pancreas and the thickened duodenal wall. Ascites was not present, but some pericephalic pancreatic and periduodenal lymphadenopathy, likely inflammatory, was seen. Nonetheless, there was still a concern of malignancy due to the position of the mass. Endoscopic ultrasound (EUS) was performed (Figure 5), which described mass-like growth of the pancreatic head, a narrowed duodenal wall and associated stenosis, but revealed no common bile duct (CBD) stricture or dilatation of the pancreatic duct system. EUS-FNA was performed from the exceptionally thickened duodenal wall and the groove area, which showed exclusively inflammatory changes, but no malignant or dysplastic cells. The imaging appearances (US, CT, and EUS), clinical presentation, medical history of alcohol abuse, laboratory markers and cytology results of EUS-FNA were highly suggestive of GP, so major unneeded surgery was avoided in this early phase of the disease. In the absence of severe complications (biliary obstruction or critical gastric outlet obstruction), our patient was treated by conservative medical measures (proton pump inhibitors (PPI) and pancreatic enzyme supplements, as well as avoidance of alcohol). After being released from the hospital, the patient stopped drinking, followed a low-fat diet, and experienced no further symptoms for six months. 
First described by Becker and Mischke in 1973, GP is an infrequent and still under-recognized type of recurrent or chronic pancreatitis that involves the anatomic space between the head of the pancreas, the common bile duct (CBD) and the duodenum, the so-called groove area [1]. Becker defined two forms of GP: “segmental” and “pure”. The first affects the pancreatic head with development of scar tissue within the groove, while the second involves exclusively the groove itself, sparing the pancreatic head [1]. Diagnosis is frequently challenging, and many physicians are not familiar with the disorder, which possibly contributes to its low reported incidence [1,2]. The exact cause of this disorder has yet to be determined. Blockage of the minor papilla is one of the discussed mechanisms. Brunner gland hyperplasia is similarly thought to be a source, with stasis of pancreatic enzymes in the dorsal pancreas. Heterotopic pancreatic alterations undergoing fibrosis and inflammation in the groove area have also been implicated. The most essential association is described to be an extended history of alcohol consumption. Continuous alcohol intake increases the protein content of pancreatic juice, which raises its viscosity and provokes the inflammatory response [1,2]. In various studies, no difference was found in age and gender distribution between GP and common chronic pancreatitis [1,2]. GP is generally recognized in middle-aged men with a history of significant alcohol abuse [2,3]. Clinically, patients present with chronic intermittent post-prandial abdominal pain similar to chronic pancreatitis. Some of them may present with recurrent nausea, postprandial vomiting, and frequently severe weight loss from impaired intestinal motility and duodenal stenosis [2,4]. Jaundice is infrequent in GP, contrary to pancreatic carcinoma, which presents with progressive jaundice. The duration of the clinical symptoms fluctuates from a few weeks to more than one year. 
The course of GP is often chronic and debilitating [2,3,4,5]. Laboratory data often show slight elevation of serum pancreatic enzymes and periodically of serum hepatic enzymes. Bilirubin levels can be high if the CBD is obstructed, and alkaline phosphatase levels can also be elevated despite the absence of ductal narrowing. CEA, AFP and CA 19-9 tumour markers are rarely elevated [2,6,7]. Imaging plays a fundamental role in recognizing this entity. The literature has rarely reported the sonographic appearance of GP. US commonly reveals a hypoechoic mass with thickening of the duodenal wall [5,6]. CT scans generally show mural thickening of the duodenal wall or a hypodense, poorly enhancing mass between the head of the pancreas and a thickened wall of the duodenum. Additional findings include enlargement of the head of the pancreas and irregular calcifications. The CBD may be narrowed, with a smooth, tapered, and constant stenosis [8,9,10]. The most typical finding on magnetic resonance (MR) imaging is a sheet-like mass corresponding to the fibrous scar in the groove between the head of the pancreas and the duodenum. MR imaging generally presents a mass that is hypointense on T1-weighted images in comparison with the pancreatic parenchyma and iso- or slightly hyperintense on T2-weighted images. After contrast administration, enhancement is typically delayed due to the presence of fibrous tissue. Cystic lesions of the groove or duodenal wall may be noticed, especially on T2-weighted images. Duodenal wall thickening and duodenal stenosis are also commonly observed [11]. Irie et al. and Ferreira et al. reported the MRI features of patients with GP with the above MRI findings, and histological analysis revealed that these imaging features correlated with fibrous scarring in each patient [12,13]. Magnetic resonance cholangiopancreatography (MRCP) helps separate GP from CBD carcinoma, as GP shows smooth CBD tapering and shouldering is uncommon [5,8]. 
Esophagogastroduodenoscopy is also necessary as it can identify a congested and polypoid mucosa of the duodenum, with narrowing of its lumen or bulging of the duodenal bulb [5,8,9]. Biopsies of the duodenal mucosa mostly yield inconclusive results or an active inflammatory reaction without any evidence of neoplastic lesions [5,9]. Valentini et al. reported the gastrointestinal endoscopy features of their patient with GP with the above-mentioned findings [14]. The possibility of obtaining samples from suspicious lesions using EUS-FNA makes EUS an ideal procedure to distinguish pancreatic adenocarcinoma from GP, allowing a diagnosis by cytopathology in approximately $90\%$ of cases [5,9,15]. EUS can reveal narrowing and thickening of the second portion of the duodenum with intramural cysts, mild thickening of the CBD, a heterogeneous hypoechoic mass and enlargement of the pancreatic head, with calcifications or pseudocysts. Regular narrowing of the CBD is seen in GP, while intermittent ductal narrowing with obstructive jaundice is seen in pancreatic adenocarcinoma. EUS-FNA biopsy results demonstrate considerable variability depending on the area sampled, and the presence of cytological features related to reactive cellular atypia resulting from pancreatitis may simulate malignancy [5,9,15]. To our knowledge, no studies compare EUS-FNA to FNB specifically in GP. However, recently, Wong et al. [15] analyzed the diagnostic performance of EUS-guided tissue acquisition by EUS-guided FNA vs. EUS-guided FNB for solid pancreatic masses, and they established that the diagnostic yield was higher with FNB than with FNA ($94.6\%$ vs. $89.6\%$). Radiologically, inflammatory modification in the groove between the duodenum and the pancreatic head can look indistinguishable from a malignancy. Nevertheless, it is crucial to recognize the integral clinical picture and the patient’s symptoms. 
A significant characteristic is the absence of major vessel encasement in GP, although some displacement may be noticed. Graziani et al. [16] reported that the gastroduodenal artery is displaced to the left in GP while, in carcinoma, it is situated between the lesion and the duodenum [4,16]. Pancreatic adenocarcinoma spreading to the peripancreatic tissue or the duodenum is expected to invade and occlude peripancreatic vessels [4,17]. Ishigami et al. [18] reported that patchy central enhancement in the portal venous phase is most suggestive of GP, occurring in 93% of patients. Patchy central enhancement reflects pancreatic tissue within the inflammatory mass. In the same report, peripheral enhancement was noticed only in carcinomas. Cystic lesions in the groove are more frequent in GP than in pancreatic carcinoma [19]. A younger age is also more suggestive of GP [1,5]. Pancreatic adenocarcinomas are much more likely to invade the retroperitoneum and involve the vasculature, which was not the case with our patient. A distinctive finding of GP, as opposed to pancreatic adenocarcinoma, is thickening of the medial duodenal wall [2,4,5]. In our case, the diagnosis was established on clinical suspicion after a biopsy with EUS suggested an inflammatory growth. The findings that cemented the diagnosis were the lesion's position, the luminal narrowing of the duodenum, and the minimal post-contrast enhancement of the lesion. The pancreatic duct and CBD were not enlarged, suggesting a benign nature. When the diagnosis is clear, GP can be treated by conservative medical measures, including endoscopic therapy as the first line of intervention. Abstinence from alcohol, pancreatic rest, and opioid analgesics are the most commonly used conservative measures. While conservative management is preferred, resection is the gold standard in the presence of obstructive manifestations or any suspicion of malignancy [4,5].
Therefore, it becomes essential to consider this entity as a potential and close second differential to pancreatic carcinoma. Recently, in a review article, seven patients received endoscopic therapy, which was considered a reasonable treatment method [20]. In some studies, the primary line of management was pain control, which was required in roughly half of the subjects [21]. These outcomes were identical to those found in other articles, which also revealed that conservative management was successful in half of the patients [22]. In one large retrospective case series using the endoscopic approach combined with medical treatment, total clinical success was obtained in approximately 70% of patients over five years [23]. Still, prospective, controlled studies are needed to confirm these findings. GP should routinely be considered in the differential diagnosis for patients presenting with pancreatic head enlargement without cholestatic jaundice, mainly when a duodenal obstruction is present and neither duodenal biopsies nor pancreatic head FNA establishes adenocarcinoma. It is fundamental for physicians to become more acquainted with the clinical, paraclinical and imaging findings that are suggestive of GP, because it can imitate pancreatic malignancy, whose prognosis and management are entirely different. Therefore, this report aims to make this entity and its hidden anatomical area more recognizable to clinicians, enabling a conclusive imaging diagnosis and reducing further diagnostic work-up, unnecessary surgery, and delayed diagnosis. ## Abstract Groove pancreatitis (GP) is an uncommon form of pancreatitis characterized by fibrous inflammation and a pseudo-tumor in the area over the head of the pancreas. The underlying etiology is unknown but is strongly associated with alcohol abuse.
We report the case of a 45-year-old male patient with chronic alcohol abuse who was admitted to our hospital with upper abdominal pain radiating to the back and weight loss. Laboratory data were within normal limits, except for the level of carbohydrate antigen (CA) 19-9. An abdominal ultrasound and computed tomography (CT) scan revealed swelling of the pancreatic head and duodenal wall thickening with luminal narrowing. We performed an endoscopic ultrasound (EUS) with fine needle aspiration (FNA) of the markedly thickened duodenal wall and the groove area, which revealed only inflammatory changes. The patient improved and was discharged. The principal objective in managing GP is to exclude a diagnosis of malignancy; a conservative approach may then be more acceptable for patients than extensive surgery.
# Serious Clinical Outcomes of COVID-19 Related to Acetaminophen or NSAIDs from a Nationwide Population-Based Cohort Study ## Abstract Acetaminophen and non-steroidal anti-inflammatory drugs (NSAIDs) have been widely prescribed to infected patients; however, their safety has not been investigated in patients with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. Our objective was to evaluate the association between the previous use of acetaminophen or NSAIDs and the clinical outcomes of SARS-CoV-2 infection. A nationwide population-based cohort study was conducted using the Korean Health Insurance Review and Assessment database with propensity score matching (PSM). A total of 25,739 patients aged 20 years and older who were tested for SARS-CoV-2 between 1 January 2015 and 15 May 2020 were included. The primary endpoint was a positive result on a SARS-CoV-2 test, and the secondary endpoint was serious clinical outcomes of SARS-CoV-2 infection, such as conventional oxygen therapy, admission to the intensive care unit, need for invasive ventilation care, or death. Of 1058 patients, 176 acetaminophen users and 162 NSAID users were diagnosed with coronavirus disease 2019. After PSM, 162 paired data sets were generated, and the clinical outcomes of the acetaminophen group were not significantly different from those of the NSAIDs group. This suggests that acetaminophen and NSAIDs can be used safely to control symptoms in patients suspected of having SARS-CoV-2. ## 1. Introduction Since December 2019, a new coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has posed a global health threat. In January 2020, the World Health Organization named the syndrome coronavirus disease 2019 (COVID-19). About 5% of patients with COVID-19 experience acute respiratory distress syndrome (ARDS), septic shock, or multiple organ failure requiring hospitalization in the intensive care unit (ICU) [1].
There are several risk factors for mortality from COVID-19, including older age, smoking, cardiovascular disease, chronic kidney disease, diabetes and obesity, malignancy, chronic HIV infection, and treatment with dexamethasone [2,3,4,5,6,7]. Concerns have been raised about drug use related to the risk of COVID-19; however, these concerns have not been fully investigated. Acetaminophen (AAP) is a safe analgesic used as a treatment to reduce fever and chills, which are often the first symptoms of COVID-19. AAP and non-steroidal anti-inflammatory drugs (NSAIDs) have been widely prescribed to infected patients to control fever, pain, and inflammation [8]. Both are inexpensive, widely available, and have well-described risk profiles. The main mechanism of NSAIDs is the inhibition of cyclooxygenase enzymes, which blocks the formation of prostaglandin derivatives from arachidonic acid [9]. Conversely, NSAID treatment for community-acquired pneumonia has been reported to be related to an increased risk of pleuropulmonary complications [10]. However, their safety in SARS-CoV-2 patients has not yet been investigated. Our objective was to evaluate the association between prior use of acetaminophen or NSAIDs and the potential influence on susceptibility to SARS-CoV-2 infection and worsening of serious clinical outcomes of COVID-19 by using nationwide COVID-19 data from the Korean National Health Insurance System (NHIS). ## 2.1. Data Sources and Study Subjects Data were obtained from the Korean Health Insurance Review and Assessment Service (HIRA). This large-scale cohort provided data on all individuals tested for SARS-CoV-2 in South Korea after referral to the Korean Centers for Disease Control and Prevention (CDC) (excluding self-referral), through services co-operating with the HIRA, the Ministry of Health and Welfare, and the Korean CDC, from 1 January 2015 to 15 May 2020 (n = 25,739).
During the COVID-19 pandemic, the Korean government has provided complementary and compulsory health insurance for all patients with COVID-19. Thus, this COVID-19 database provides access to information consisting of personal data, patients' medical records within 6 years (including medical visits, prescriptions, diagnoses, and procedures), hospital visits, outcomes related to COVID-19, and death records. The medical records of all patients were anonymized. ## 2.2. Study Population We defined the date of each patient's first SARS-CoV-2 test as the cohort entry date (individual index date). Of the 25,739 patients tested for SARS-CoV-2, those under the age of 20, those with no history of AAP or NSAID treatment, and those prescribed both AAP and NSAIDs within 2 weeks of the index date were excluded (n = 24,508). SARS-CoV-2 infection was defined as a positive real-time reverse transcriptase-PCR (RT-PCR) assay of nasal and pharyngeal swabs according to the World Health Organization (WHO) guidelines [11]. Between 1 January 2015 and 15 May 2020, information on age, sex, and region of residence was extracted from the insurance eligibility data by combining the claims-based data of the National Health Insurance Service. A history of underlying diseases (hypertension—HTN, chronic kidney disease—CKD, cerebrovascular disease—CVA, diabetes mellitus—DM, chronic obstructive pulmonary disease—COPD, and asthma) was confirmed by at least two claims within one year bearing the appropriate International Classification of Diseases, 10th revision (ICD-10) code [12]. The Charlson Comorbidity Index (CCI) scores were calculated from the ICD-10 codes using previously published methods [12]. The residential region was classified as Seoul, Gyeonggi, Gyeongbuk, Daegu, or other [13]. Drugs used within 30 days prior to the index date included systemic steroids [14].
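The Section 2.2 exclusions amount to a simple per-patient filter. A minimal sketch in Python, with illustrative record fields (the actual HIRA claims schema is not described in the paper):

```python
# Hypothetical per-patient records; field names are illustrative only,
# not the HIRA schema.
patients = [
    {"id": 1, "age": 45, "aap_14d": True,  "nsaid_14d": False},
    {"id": 2, "age": 19, "aap_14d": True,  "nsaid_14d": False},  # under 20
    {"id": 3, "age": 60, "aap_14d": False, "nsaid_14d": False},  # no exposure
    {"id": 4, "age": 33, "aap_14d": True,  "nsaid_14d": True},   # both drug classes
    {"id": 5, "age": 52, "aap_14d": False, "nsaid_14d": True},
]

def eligible(p):
    """Apply the Section 2.2 exclusions: age >= 20 and exposure to exactly
    one of AAP or NSAIDs within 2 weeks before the index date."""
    exposed_to_one = p["aap_14d"] != p["nsaid_14d"]  # exactly one drug class
    return p["age"] >= 20 and exposed_to_one

cohort = [p for p in patients if eligible(p)]
print([p["id"] for p in cohort])  # → [1, 5]
```

Only patients 1 and 5 remain: one AAP user and one NSAID user, each aged 20 or over, mirroring the two exposure groups compared in the study.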
The final sample comprised 1231 patients who were tested for SARS-CoV-2 and prescribed AAP or NSAIDs, of whom 338 tested positive for SARS-CoV-2 (Figure 1). ## 2.3. Exposure All AAP or NSAID prescriptions were identified within the two weeks before the index date. A non-treatment user was defined as a patient who had not been prescribed AAP or NSAIDs within 2 weeks prior to the index date. ## 2.4. Outcomes The primary outcome was defined as a positive result on the SARS-CoV-2 test [15]. The secondary outcomes were serious clinical outcomes, defined as composite endpoint 1 (conventional oxygen therapy, admission to the intensive care unit—ICU, mechanical ventilation, or death). In addition, composite endpoint 2 (ICU admission, mechanical ventilation, or death), which excludes conventional oxygen therapy, was analyzed [3]. We also analyzed the period from taking the study medication to the clinical outcome in patients with COVID-19. ## 2.5. Ethics Approval This study was approved by the Institutional Review Board of the corresponding author's hospital. The anonymized data were provided to the authors by the NHIS. ## 2.6. Statistical Analysis Two rounds of propensity score matching (PSM) were performed to balance the baseline characteristics of the two groups and reduce potential confounding. A logistic regression model was used to estimate propensity scores from age, gender, region of residence (Seoul, Gyeonggi, Gyeongbuk, Daegu, or other); history of HTN, CKD, CVA, DM, COPD, or asthma; CCI (0, 1, or ≥2); and current systemic steroid use. We matched the two groups in a 1:1 ratio using a 'greedy nearest-neighbor' algorithm on the predicted probabilities, both for AAP versus NSAID users among all patients tested for SARS-CoV-2 (n = 25,739) and for AAP versus NSAID users among patients with confirmed COVID-19 (n = 338).
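The 'greedy nearest-neighbor' 1:1 matching named in Section 2.6 can be sketched as follows; this is a minimal illustration of the general algorithm with made-up propensity scores, not the study's analysis code (real implementations usually also apply a caliper, i.e. a maximum allowed score distance). Balance is then checked with the standardized mean difference (SMD), computed here as the difference in group means over the pooled standard deviation:

```python
from statistics import mean, variance

def greedy_match(treated, control):
    """1:1 greedy nearest-neighbor matching on propensity scores.
    `treated` and `control` are lists of (id, score) pairs; each
    control is used at most once."""
    available = dict(control)  # id -> propensity score
    pairs = []
    for t_id, t_ps in sorted(treated, key=lambda x: x[1]):
        if not available:
            break
        # pick the unused control with the closest propensity score
        c_id = min(available, key=lambda c: abs(available[c] - t_ps))
        pairs.append((t_id, c_id))
        del available[c_id]
    return pairs

def smd(a, b):
    """Standardized mean difference for one covariate; |SMD| < 0.1 is a
    commonly used balance threshold (general convention, not this paper's)."""
    pooled_sd = ((variance(a) + variance(b)) / 2) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

# Made-up propensity scores for illustration
treated = [("A", 0.30), ("B", 0.70)]
control = [("x", 0.28), ("y", 0.65), ("z", 0.90)]
print(greedy_match(treated, control))  # → [('A', 'x'), ('B', 'y')]
```

Greedy matching is order-dependent: once a control is paired it is removed from the pool, so control "z" is left unmatched here.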
Matching adequacy, i.e., the absence of major imbalances in each baseline covariate, was assessed by comparing the standardized mean differences (SMDs) of the matched covariate distributions, which is more informative than calculating t-test p-values [12]. The primary endpoint was a positive result on the SARS-CoV-2 test. The secondary endpoint was the composite endpoints and serious clinical outcomes of COVID-19 patients. Data were analyzed using a logistic regression model and expressed as adjusted ORs (aORs) with 95% confidence intervals (CIs) for both groups after adjusting for potential confounding factors: age, sex, region of residence; history of HTN, CKD, CVA, DM, COPD, or asthma; CCI; and current use of systemic steroids. Further analyses were conducted to establish the robustness of the results: AAP or NSAID use was stratified by duration of use. ## 3. Results Of the 25,739 patients who underwent SARS-CoV-2 tests, 1231 patients prescribed either AAP (n = 643) or NSAIDs (n = 588) were defined as the complete unmatched cohort. The baseline characteristics of the entire cohort are shown in Table 1. The mean age of the entire cohort was 55.8 years (±19.7 years), and there were 681 females (55.3%). In the two cohorts, patients taking AAP or NSAIDs were matched in equal numbers (n = 529). No major imbalances in demographic or clinical characteristics were observed in the SMDs within the PSM-matched cohorts. The SARS-CoV-2 test positivity rate in patients using AAP was 33.3% (176/529) compared to 31.0% (162/529) in those using NSAIDs (Table 2). Table 3 shows the baseline characteristics of confirmed COVID-19 patients. We performed a PSM-matched analysis of SARS-CoV-2-positive patients. COVID-19 patients had a concordant history of AAP (n = 176) and NSAID (n = 162) use.
The baseline characteristics of patients diagnosed with COVID-19 and treated with AAP or NSAIDs are described in Table 3. No major imbalances in demographic and clinical characteristics were noted when evaluated using the SMDs within the PSM-matched cohort groups in Table 4. The use of AAP was not related to an increased risk of composite endpoint 1 of COVID-19 compared to the use of NSAIDs (Table 5a). The use of AAP and NSAIDs was also not significantly associated with an increased risk of serious COVID-19 outcomes (composite endpoint 2) (Table 5b). As shown in Table 6, there were no significant differences between the AAP and NSAID groups in the period from taking the medication to the clinical outcome. ## 4. Discussion The present study, using a nationwide Korean cohort, investigated whether AAP or NSAID use increased susceptibility to SARS-CoV-2 infection among 25,739 patients tested for SARS-CoV-2. This study found that 338 of 1058 patients previously prescribed AAP or NSAIDs had a positive test for SARS-CoV-2. In addition, the study found no significant differences in mortality or serious clinical outcomes in patients receiving AAP or NSAIDs within 2 weeks prior to the diagnosis of COVID-19. Our results suggest that the use of AAP or NSAIDs may be a safe option for symptom relief, even when COVID-19 is suspected. The effect of NSAIDs in patients with COVID-19 has been controversial in previous studies. Prada et al. demonstrated that exposure to NSAIDs does not increase the risk of SARS-CoV-2 infection or the severity of COVID-19 [16].
A prospective, multicenter cohort study in the United Kingdom based on the ISARIC Clinical Characterization Protocol [17] demonstrated that NSAID use was not associated with worse in-hospital mortality (matched OR 0.95, 95% CI 0.84–1.07; p = 0.35), critical care admission (1.01, 0.87–1.17; p = 0.89), requirement for invasive ventilation (0.96, 0.80–1.17; p = 0.69), or oxygen requirement (1.00, 0.89–1.12; p = 0.97). In addition, in a recent systematic review, Zhao et al. [18] demonstrated that prior use of NSAIDs was not associated with mechanical ventilation but was associated with a decrease in mortality (adjusted odds ratio—aOR, 0.68; 95% confidence interval—CI, 0.52–0.89). Huh et al. revealed, using the Korean HIRA database, that NSAIDs were not related to the diagnosis of COVID-19 (aOR, 1.04; 95% CI, 0.97–1.12) but were associated with severe disease (aOR, 1.53; 95% CI, 1.25–1.86) [19]. Several studies have examined the efficacy of different types of NSAIDs. In a double-blind randomized controlled study, 500 mg of naproxen every 12 h improved cough and shortness of breath in COVID-19 patients [20]. In an in vitro study [21], compared to paracetamol or the COX-2 inhibitor celecoxib, naproxen showed direct antiviral activity against SARS-CoV-2 replication and protected the lung epithelium from damage caused by the pandemic virus, combining antiviral and anti-inflammatory effects. Another in vitro study reported dose-dependent effectiveness of indomethacin; treatment with a sustained-release formulation at a dose of 75 mg twice daily was expected to achieve a complete response to SARS-CoV-2 infection within 3 days [22]. Moreover, Kiani et al. [23] investigated the effectiveness of ketotifen, naproxen, and indomethacin, alone or in combination, in reducing SARS-CoV-2 replication.
They found that the combinations of ketotifen with indomethacin or naproxen increased the percentage inhibition of SARS-CoV-2 replication, and no cytotoxic effects were observed. Although the present study did not analyze whether different types of NSAIDs affect serious clinical outcomes, a review of previous studies suggests that NSAIDs have a positive effect in COVID-19 infection. Acetaminophen, compared to other over-the-counter drugs, is a safe and commonly recommended analgesic. Micallef et al. [24] argued that symptomatic treatment with NSAIDs for uncomplicated symptoms (fever, pain, or myalgia) deriving from COVID-19 is not recommended due to an increased risk of severe bacterial complications, and recommended treatment with AAP as a safer drug alternative. Although not discussed in this study, another study revealed that patients with acute liver injury usually have undetectable levels of AAP; thus, acute liver injury or failure should be considered in patients with COVID-19 when chronic or very high AAP ingestion is reported [25]. The present study, using the Korean HIRA database, demonstrated that NSAIDs could be an alternative option to AAP for the relief of COVID-19 symptoms. However, NSAIDs can lead to misdiagnosis by masking fever, worsening the prognosis of COVID-19. It is also possible that ibuprofen may upregulate angiotensin-converting enzyme-2 (ACE-2) expression and allow the SARS-CoV-2 virus to enter epithelial cells more easily. This study has several strengths. Above all, the HIRA data source consisted of a very large sample data set, and the effects of confounding factors associated with NSAIDs were reduced using PSM. Furthermore, this was the first study to evaluate the safety of AAP and NSAIDs in Asian COVID-19 patients using PSM and a unique analysis. There were several limitations to this study.
First, patients were included according to prescription medications; therefore, a drug listed in electronic health records may not capture actual drug exposure exhaustively. Second, this was a retrospective study, and, despite efforts to adjust for all confounders by PSM, additional unmeasured confounding factors might have influenced the outcomes. Third, the total amount of AAP or NSAIDs and the different types of NSAIDs were not considered in this study. Despite these limitations, this study provides cohort-based evidence on the safety of AAP or NSAIDs prior to the diagnosis of COVID-19. ## 5. Conclusions The use of AAP or NSAIDs prior to the diagnosis of COVID-19 was not associated with worse outcomes of COVID-19 in a nationwide Korean cohort study with PSM. Therefore, AAP or NSAIDs can be safely prescribed to COVID-19 patients.
# Prevalence of Symptomatic Knee Osteoarthritis in Saudi Arabia and Associated Modifiable and Non-Modifiable Risk Factors: A Population-Based Cross-Sectional Study ## Abstract Objective: This study aimed to determine the prevalence of knee osteoarthritis (OA) in Saudi Arabia and the association between knee OA and modifiable and non-modifiable risk factors. Methods: A self-reported, population-based, cross-sectional survey was conducted between January 2021 and October 2021. A large, population-representative sample (n = 2254) of adult subjects aged 18 years and over from all regions of Saudi Arabia was collected electronically using convenience sampling. The American College of Rheumatology (ACR) clinical criteria were used to diagnose OA of the knee. The knee injury and osteoarthritis outcome score (KOOS) was used to investigate the severity of knee OA. This study focused on modifiable risk factors (body mass index, education, employment status, marital status, smoking status, type of work, previous history of knee injury, and physical activity level) and non-modifiable risk factors (age, gender, family history of OA, and presence of flatfoot). Results: The overall prevalence of knee OA was 18.9% (n = 425), and women were affected more than their male counterparts (20.3% vs. 13.1%, p < 0.001). The logistic regression analysis model showed age (OR: 1.06 [95% CI: 1.05–1.07]; p < 0.01), sex (OR: 2.14 [95% CI: 1.48–3.11]; p < 0.01), previous injury (OR: 3.95 [95% CI: 2.81–5.56]; p < 0.01), and obesity (OR: 1.07 [95% CI: 1.04–1.09]; p < 0.01) to be associated with knee OA. Conclusions: The high prevalence of knee OA underlines the need for health promotion and prevention programmes that focus on modifiable risk factors to decrease the burden of the problem and the cost of treatment in Saudi Arabia. ## 1. Introduction Osteoarthritis (OA) is the most common type of arthritis.
It is a complex disorder that can affect the articular cartilage, bones, ligaments, and synovium, involving degenerative and reparative processes and inflammation of the joint [1]. OA may affect different body joints, both proximal and distal (large, medium, and small joints), and most commonly occurs in the knee joint [2]. Several risk factors can increase the likelihood of having knee OA, and these can be divided into non-modifiable and modifiable factors [3]. There are six main well-known categories of modifiable risk factors: obesity and overweight, comorbidities (diabetes, depression, and cardiovascular disease), occupational factors, physical activity, biomechanical factors, and dietary exposure. Treatment should target the modifiable risk factors, as this makes it possible to reduce pain and disability [3]. According to the International Classification of Functioning, Disability, and Health (ICF) framework, knee OA leads to activity limitation and participation restriction as well as impairment [4]. It is considered the primary cause of physical disability in the general population [5,6]. Physical disability resulting from pain and lack of functional capability decreases quality of life and increases the risk of further morbidity. Global statistics show that around 250 million people worldwide are affected by knee OA. In Saudi Arabia, it is one of the most common and fastest-growing health problems [7]. It is necessary to consider the prevalence of OA to understand the impact of the disease on society. Recent studies in Saudi Arabia have shown that knee OA increases with age, reaching up to 60.6% in people aged 66–75 years compared with 30.8% in those aged 46–55 years [8]. Other studies have shown that 39.75% of the population, including 53.3% of males and 60.9% of females, suffers from knee OA [1,9]. Prior studies on the prevalence of OA conducted in Saudi Arabia had some limitations.
Those studies introduced threats to internal validity, and the data were collected from local regions or cities with small sample sizes that could not represent the general population of the kingdom [1,8,10]. Moreover, gender-based differences were not addressed, and the diagnostic criteria for OA were mainly based on radiological findings. Interestingly, a previous study showed that 50% of subjects with radiographic findings were clinically asymptomatic, and vice versa [11]. In Saudi Arabia, there are still insufficient data from large samples on knee OA and its risk factors. Clinical guidelines also do not recommend routine X-rays to diagnose OA [12]. Therefore, the current study uses clinical criteria to diagnose knee OA and includes participants from all regions of the kingdom, ensuring an optimal number of participants and a population-representative sample. Furthermore, the prevalence of OA in general varies based on race and ethnicity [13]. Therefore, it is imperative to obtain an updated prevalence of knee OA and identify the modifiable risk factors for timely prevention and early intervention. The findings of the current study will also help health agencies and stakeholders to plan educational and preventive programmes that address the modifiable risk factors to ease the socio-economic burden of OA. The present study aims to determine the prevalence of symptomatic knee OA in Saudi Arabia and examine the association of knee OA with modifiable and non-modifiable risk factors. The secondary objective is to compare individuals affected by knee OA with non-affected individuals using the knee injury and osteoarthritis outcome score (KOOS). ## 2.1. Study Design and Participants This population-based cross-sectional study was conducted between January and October 2021 among the Saudi general population. The study was approved by the research ethics committee of the University of Hail (Ethical approval no: H-2021-009).
A convenience sampling method was used to collect data from all 13 regions of Saudi Arabia. Written informed consent was obtained from each participant before participation. Adult individuals aged 18 years and above were included. Individuals who had severe mental disorders or physical disabilities or deformities in the lower limbs were excluded. Individuals with severe mental disorders were excluded due to their inability to give informed consent and provide the desired information. Individuals with physical disabilities or deformities were excluded because their existing disability or deformity could affect pain, stiffness, and loss of function. The required sample size was calculated based on the previously published equation $N = Z^2 P(1 - P)/d^2$, with a confidence interval of 95% [14]. A Z (confidence level) value corresponding to 95% confidence was selected, since this is the most commonly used [14]. P (prevalence) was taken to be 0.22 (22%) based on two previous studies, one of which reported a knee OA prevalence of 16% on a global scale [15], while the second investigated the prevalence of OA among Gulf Cooperation Council countries (average across studies for knee OA equal to 27%) [16]. This led us to take the average of the two studies (21.5%), which was rounded to 22%. The d (precision) was set at 0.018, being more conservative [14]; this yielded 2035 participants, and around 11% (219 participants) were added to allow for missing data. ## 2.2. American College of Rheumatology (ACR) Knee OA Assessment Criteria OA is a pathological condition affecting the structures of the entire joint, with cartilage degeneration, bone remodelling, osteophyte production and synovial inflammation that cause pain, stiffness, oedema and loss of function [17]. To diagnose knee OA, the ACR clinical criteria were used.
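The sample-size calculation in Section 2.1 can be checked numerically. A minimal sketch, assuming Z = 1.96 as the standard value for a 95% confidence level (consistent with the reported total):

```python
def sample_size(z, p, d):
    """Sample size for a prevalence estimate: N = Z^2 * P * (1 - P) / d^2,
    the formula given in Section 2.1."""
    return z * z * p * (1 - p) / (d * d)

# Z = 1.96 (95% confidence), P = 0.22 (assumed prevalence), d = 0.018 (precision)
n = sample_size(1.96, 0.22, 0.018)
print(round(n))  # → 2035, matching the 2035 participants reported
```

Adding roughly 11% (219 participants) for missing data gives the target of 2254, which equals the number of respondents reported in the Results.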
The criteria define knee OA as pain felt on the majority of days over the previous 30 days, accompanied by at least three of the following: (1) age of 51 years and above, (2) bony enlargement, (3) 30 min of morning joint stiffness, (4) bony tenderness and (5) crepitus [18]. ## 2.3. Implementation of the Assessment Criteria To address the current study's aim and apply the ACR clinical assessment criteria, a self-reported survey was conducted, and a closed-ended questionnaire was designed. The questionnaire consisted of three sections. The first section covered demographic characteristics, lifestyle, and health-related issues. The information collected included gender (male or female), age (years), weight (kg), height (m), education (illiterate, primary, intermediate, high school, diploma, bachelor's degree, or higher degrees), work type (office work, fieldwork, both office and fieldwork, retired, housewife, or unemployed), marital status (single or married), smoking habits (yes or no), previous knee injuries (yes or no), presence of flat feet (yes, no, or I do not know), family history of OA (yes, no, or I do not know), and physical activity level (inactive, low intensity, moderate intensity, or high intensity). The second section asked questions related to the ACR clinical criteria [18]. The questions were as follows: (1) “Have you felt pain in one or both knees in most of the previous 30 days?” (2) “In which knee do you have pain (right, left, both, or no pain)?” (3) “How long have you had the pain?” (4) “Do you feel pain when pressing or compressing your knee/knees?” (5) “Do you think your knee/knee bones is/are larger than normal (enlarged)?” (6) “Does/do your knee/knees produce the sound of clicking or crepitus?” (7) “Do you think your knee/knees feel stiff for the first 30 min in the morning?” The third section included the KOOS scale. All questions were asked in Arabic, since KOOS has been shown to be valid and reliable in the Arabic language [19]. ## 2.4. KOOS Scale KOOS was used to investigate the severity of knee OA and to compare, according to the American College of Rheumatology (ACR) clinical criteria, individuals with knee OA against non-OA individuals and against individuals with knee pain but no knee OA diagnosis. The psychometric properties of the KOOS scale have been assessed, and it has been found to be a reliable and valid instrument for assessing the knee and associated problems [20]. The subscales of KOOS are pain (5 items), symptoms (4 items), ADL (9 items), sport/recreation (3 items), and QOL (2 items). The total KOOS is based on the individual score calculated for each subscale. Each item is scored from 0 to 4 (0 = none, 1 = mild, 2 = moderate, 3 = severe, and 4 = extreme). The maximum score is 100, indicating no problems, while 0 indicates extreme problems. An Excel spreadsheet downloaded from the official website (http://www.koos.nu/index.html, accessed on 1 December 2020) was used to calculate KOOS. ## 2.5. Scoring to Diagnose Individuals with Knee OA The first step was to identify individuals who had suffered knee pain on the majority of days in the previous 30 days and answered “yes” to the question “Have you felt pain in one or both knees in most of the previous 30 days?”, for which they were given a score of 1. In the second step, a score of 1 was given for the presence of each of the following: crepitus (“Does/do your knee/knees produce the sound of clicking or crepitus?”), bony enlargement (“Do you think your knee/knee bones is/are larger than normal?”), bony tenderness (“Do you feel pain when pressing or compressing your knee/knees?”), the presence of 30 min of morning joint stiffness (“Do you think your knee/knees feel stiff for the first 30 min in the morning?”), and age above 50 years (“How old are you?”). If the total score reached 3 or above, the ACR criteria were fulfilled, and the participant was diagnosed as having clinical knee OA.
If the first-step question concerning the presence of knee pain in the majority of the previous 30 days had been answered with “no”, or had been answered with “yes” but the total score was less than 3, the participant was classified as healthy (Figure 1). ## 2.6. Data Collection The electronic data collection was executed via an online Google form. The link to the form was shared with potential participants through their WhatsApp, Twitter, and email accounts. The link was republished more than once to increase the response rate. The link was shared with individuals in all regions of Saudi Arabia (Makkah Region, Riyadh Region, Eastern Region, Asir Region, Jazan Region, Medina Region, Al-Qassim Region, Tabuk Region, Ha’il Region, Najran Region, Al-Jawf Region, Al-Bahah Region, and Northern Borders Region). ## 2.7. Statistical Analysis The collected data were extracted from the Google form into a Microsoft Excel (version 16.33) spreadsheet and then exported to SPSS version 25 (SPSS Inc., Chicago, IL, USA). Body mass index (BMI) was divided into four categories based on the WHO classification (underweight < 18.5, normal = 18.5–24.9, overweight = 25–29.9, and obese > 29.9) [21]. Individuals were grouped by age into three categories (18–30, 31–49, and ≥50) to enable comparison. Descriptive analysis was performed for categorical data and presented as frequencies and percentages. The prevalence of knee OA was compared between the different demographic, lifestyle, and health-related characteristics using the chi-square test. KOOS subscales and total KOOS were compared between individuals without knee OA and individuals with knee OA, and between individuals with knee pain but no OA and individuals with knee OA, using an independent-samples t-test after checking for normality. Cohen’s d was calculated to show the effect size and interpreted as large (≥0.8), medium (0.5–0.79), and small (0.2–0.49).
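The two-step scoring described in Section 2.5 reduces to a simple counting rule. A minimal Python sketch follows; the field names and the example record are illustrative, not the study’s actual data dictionary:

```python
# Sketch of the two-step ACR clinical scoring described in Section 2.5.
# Field names and the example record are illustrative only.

def acr_knee_oa(record: dict) -> bool:
    """Return True when the ACR clinical criteria are fulfilled (total score >= 3)."""
    # Step 1: knee pain on most of the previous 30 days is mandatory (1 point).
    if not record["pain_most_of_30_days"]:
        return False
    score = 1
    # Step 2: one point for each supporting criterion that is present.
    score += record["age_years"] > 50
    score += record["bony_enlargement"]
    score += record["bony_tenderness"]
    score += record["crepitus"]
    score += record["morning_stiffness_30min"]
    return score >= 3

example = {
    "pain_most_of_30_days": True,
    "age_years": 55,
    "bony_enlargement": False,
    "bony_tenderness": False,
    "crepitus": False,
    "morning_stiffness_30min": True,
}
print(acr_knee_oa(example))  # pain (1) + age (1) + stiffness (1) = 3 -> True
```

Note that the mandatory pain criterion itself contributes one point, so fulfilling the rule requires at least two of the five supporting criteria in addition to pain.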
Forward binomial logistic regression was used to investigate the risk factors related to knee OA that were significant when the chi-square test was applied. Age and BMI were entered into the model as continuous variables. A p-value less than 0.05 was considered statistically significant. ## 3. Results A total of 2254 individuals from the 13 regions of Saudi Arabia responded to the questionnaire. The age of respondents ranged from 18 to 80 years (mean 35 ± 13.11 years). Most of the respondents ($80.88\%$) were females, and $44.23\%$ were aged 18–30 years; $60.74\%$ were married, and $35.58\%$ were of a normal body weight. Most of the participants ($60.03\%$) had completed a bachelor’s level of education, $29.41\%$ were office workers, and $27.06\%$ were unemployed (Table 1). The majority of respondents ($89.66\%$) reported no previous injury to the knee (ACL, meniscus). Family history of knee OA was reported in $63.75\%$ of the participants, and $6.83\%$ reported having flat feet (Table 2). A total of 1262 ($55.99\%$) participants reported having knee pain. Approximately $21.21\%$ had had knee pain for 1–5 years, while $3.06\%$ reported having it for longer than 15 years; $2.62\%$ had been absent from work for more than 15 days in the previous 12 months due to knee pain. The prevalence of knee OA based on the ACR clinical criteria was $18.86\%$ ($n = 425$) (Table 2). The prevalence of knee OA differed significantly by gender, age group, marital status, BMI category, previous knee injury, family history of OA, presence of flat feet, educational level, smoking habits, and physical activity level (Table 3). The prevalence of knee OA increased with age; $6.82\%$ of participants aged 18–30 years were affected by knee OA, whereas $45.77\%$ of the participants 50 years and above were affected. The prevalence of knee OA was significantly higher among females than among their male counterparts ($20.25\%$ vs. $12.99\%$, $p \leq 0.01$).
The prevalence of knee OA was higher in married ($25.57\%$, $p \leq 0.01$) and obese ($34.30\%$) individuals. Conversely, the prevalence of knee OA was lower in smokers than in non-smokers ($11.11\%$ vs. $19.42\%$, $p = 0.012$). Moreover, the prevalence was highest in illiterate individuals ($84.62\%$), and the lowest was observed in individuals with master’s/Ph.D. degrees ($14.74\%$, $p \leq 0.01$). The prevalence of knee OA was $49.57\%$ ($p \leq 0.01$) among those with previous injuries to the knee and $31.82\%$ ($p \leq 0.01$) and $22.30\%$ ($p \leq 0.01$) in respondents who reported having flat feet and a family history of OA, respectively. The logistic regression model was statistically significant ($p \leq 0.01$). The model showed age, gender, previous injury, level of physical activity, education level, smoking, family history of OA, and BMI to be associated with an increased or decreased prevalence of knee OA. Females had more than twice the risk of developing knee OA compared with males (OR: 2.14 [$95\%$ CI: 1.48–3.11]; $p \leq 0.01$). Ageing was also associated with an increase in the risk for knee OA (OR: 1.06 [$95\%$ CI: 1.05–1.07]; $p \leq 0.01$). Previous injury to the knee (ACL, meniscus) was also found to be a risk factor for knee OA (OR: 3.95 [$95\%$ CI: 2.81–5.56]; $p \leq 0.01$). Smoking was found to decrease the risk for knee OA (OR: 0.51 [$95\%$ CI: 0.27–0.96]; $p \leq 0.01$). Individuals with higher degrees were found to have less risk for knee OA in comparison to those who were illiterate. Higher BMI was found to be associated with an increase in the risk for knee OA (OR: 1.07 [$95\%$ CI: 1.04–1.09]; $p \leq 0.01$). Interestingly, individuals with moderate levels of physical activity were found to have a decreased risk for knee OA compared with inactive individuals, while those with a high level of physical activity showed an increased risk compared with inactive individuals.
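Each odds ratio above is the exponential of a logistic-regression coefficient, with a Wald confidence interval computed on the log scale and then exponentiated. A short sketch; the beta and standard error below are back-calculated to roughly reproduce the reported OR for female sex, not values taken from the study’s model:

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Turn a logistic-regression coefficient and its standard error
    into an odds ratio with a Wald 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Illustrative inputs, back-calculated from the reported OR of 2.14
# (95% CI 1.48-3.11) for female sex; not outputs of the study's model.
or_, lo, hi = odds_ratio_ci(beta=math.log(2.14), se=0.19)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Because the interval is symmetric on the log scale, the exponentiated bounds are asymmetric around the OR, which is why very wide intervals (such as those in sparse subgroups) skew strongly upward.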
Individuals with a family history of OA showed a higher risk for developing knee OA than individuals with no family history of OA (Table 4). Interestingly, when comparing non-OA participants with those with knee OA using KOOS, all subscales showed significant differences with p-values less than 0.01 and a large effect size. On comparing non-OA individuals with knee pain and individuals with OA, there was a significant difference in all KOOS subscales and total KOOS scores with a large effect size (Table 5). ## 4. Discussion Knee OA is a common, progressive, and degenerative disease that affects the daily lives of many people across the globe. Pain, stiffness, and limited mobility associated with the condition have negative influences on people’s quality of life [22]. Therefore, the overarching aim of this study was to determine the prevalence of symptomatic knee OA and the associated risk factors in Saudi Arabia. Our study reported a high prevalence of knee OA ($18.86\%$, $n = 425$). A recent systematic review and meta-analysis summarizing 88 previous studies showed similar findings [16], highlighting that the pooled global prevalence of knee OA for those aged 15 and above was $16\%$, ranging from $14.3\%$ to $17.8\%$ [15]. Our study also showed that the estimated prevalence rises as the minimum age of the sample increases. A previous study in Al-Qaseem city reported the prevalence of knee OA using the same diagnostic criteria as were used in our study and showed a prevalence of $13\%$ [8], 5.86 percentage points lower than the finding in this study. Importantly, the current study showed a statistically significant association between OA and age, gender, BMI, previous knee injury, level of education, level of physical activity, family history of OA, and smoking.
Previous studies have also shown age [23], female sex [24], obesity [25], genetic factors [26], and previous injury [27] to be linked with an increased risk for knee OA. Interestingly, the current study found smoking to be protective and that it can reduce the risk for OA. A previous systematic review was also aligned with the current study’s findings [28]. Moreover, higher education levels were protective and reduced the risk for developing knee OA, which is also in agreement with our study [29]. On comparing the risk factors in the current study, the results showed that a previous history of knee injury ranked as the strongest risk factor, with an odds ratio of 3.95. Our evidence suggests that age increases the risk for osteoarthritis. The age of the subjects in our study ranged from 18 to 80 years, and the occurrence of knee OA rose to $45.77\%$ among persons aged over 50 years compared with $6.82\%$ in those aged between 18 and 30 years. This can potentially be attributed to age-related cellular and physiological changes that are associated with decreased muscle strength and mass, poor proprioception, and cartilage thinning [30]. It has been reported by the WHO Scientific Group on Rheumatic Diseases that an estimated $10\%$ of the world’s population aged 60 years and older have significant clinical problems that can be attributed to OA [31]. Our study found a high prevalence of knee OA in females and obese individuals. Previous studies have also shown an increasing trend for knee OA in females, which is likely to be due to hormonal factors and anatomical and kinematic differences in females [32]. Obesity is responsible for additional overloading of the weight-bearing joints, which further contributes to triggering the wear-and-tear process inside the joints of individuals with high BMI [33]. A recent study in Saudi Arabia showed similar results regarding the increased risk for developing knee OA in individuals with higher BMI [34].
A high prevalence of obesity has been reported in Saudi females ($33.5\%$) compared with males ($24.1\%$) [35]. Based on these statistics, Saudi females with obesity may be more likely to have an additional risk for knee OA development. Moreover, advancing age in Saudi females with obesity may further pose a risk for knee OA. Hence, modifiable risk factors should be targeted to reduce the risk for developing knee OA in the future and lower the burden of OA in the Saudi community. A previous study showed that weight loss resulted in a reduction in the risk for developing OA and in symptoms in individuals affected by OA [36]. A weight-loss programme can be used as a proactive strategy to prevent and manage many non-communicable diseases, including OA, in Saudi Arabia. Several factors may explain the reduced risk for developing OA among smokers and individuals with a higher education level. Smoking may promote the proliferation of chondrocytes, improve the expression of cartilage-specific type 2 collagen, and have anti-inflammatory effects [28]. A previous study could not conclude a reason for education level reducing the risk for developing OA [29]. However, higher education could mean more knowledge about disease prevention, while those with lower education most likely work in jobs that may require standing for long periods and bending. These explanations are speculative in nature and need to be confirmed. Notably, the prevalence of knee OA in individuals with flatfoot was significantly higher ($31.82\%$) than in those with normal feet ($17.03\%$), supporting reported evidence that bilateral flat feet are significantly associated with worse OA-related knee pain and disability [37]. This may be explained by the altered load distribution on the knee when the foot is flat. A previous study showing knee pain and knee OA cartilage damage to be associated with flatfoot supports our earlier claim [38].
Despite the current study using clinical criteria to diagnose individuals with knee OA, the KOOS subscales showed significant differences between individuals with knee OA and both healthy individuals and healthy individuals with knee pain, which strengthens the study. The current study showed that individuals with knee OA had scores of 58.24 ± 18.32, 55.29 ± 16.28, 58.74 ± 20.15, 38.61 ± 25.46, and 46.75 ± 22.05 for pain, symptoms, ADL, sport, and QOL, respectively. A previous study that investigated the reliability of KOOS in the Saudi population with knee OA showed similar results for the pain (45.6 ± 18.6), symptoms (52.9 ± 21.3), ADL (47.4 ± 20.1), sport (17.7 ± 18.9), and QOL (31.3 ± 16.8) subscales [39]. These findings are in line with a study that showed poor KOOS outcomes with knee OA after a joint injury compared to uninjured controls [40]. Interestingly, the association between the prevalence of knee OA and level of physical activity showed that moderate levels reduced the risk for knee OA compared to a sedentary lifestyle, whereas high levels led to an increased risk for knee OA. According to the Physical Activity Guidelines for Americans, moderate levels of physical activity (150 min/week of moderate-intensity exercise in bouts lasting ≥10 min) and lower levels of physical activity (at least 45 total minutes/week of moderate-intensity exercise) were associated with improved function and gait speed in OA patients. The reverse impact of high levels of physical activity on knee OA may be explained by the nature of the physical activity, where high-impact activity and a large number of weight-bearing exercises may lead to joint destruction [41]. ## Strengths and Limitations This study has certain strengths, one of which is that it is the first population-representative study from Saudi Arabia with a large sample size. This study also informs the fundamental knowledge and highlights the prevalence of knee OA in the Saudi population.
Another strength is its use of valid clinical criteria for detecting knee OA. Nevertheless, measuring the association between both modifiable and non-modifiable risk factors would have further strengthened this study. Using an online form to collect the data has the advantages of reducing cost and being able to reach remote areas, although it raises concerns about the accuracy and reliability of the data. The modifiable risk factors associated with knee OA will further guide the design of effective interventions to reduce the burden of disease in the community. However, some limitations should be acknowledged, such as the current study’s use of a cross-sectional design, which has a major limitation in reporting causal explanations. The current study also used a convenience sampling technique, which may reduce the representativeness of the sample in the general population. Furthermore, due to a lack of logistic support, only a self-reported method was used, which may have generated reporting bias in the study, especially since an exaggerated difference in prevalence between sexes has been reported in the literature, as females may be more likely to report OA [42]. Future research with clinical-based diagnoses by specialized health providers is warranted to attain robust findings. Likewise, future studies should explore other risk variables that may increase the risk for developing knee OA, such as the type of shoes worn or the knee adduction moment during activity. A previous study showed that a higher knee adduction moment led to progression of knee OA by increasing the load on the medial side of the knee [43]. Using footwear such as lateral wedge insoles can reduce the knee adduction moment [44]. ## 5. Conclusions The study reveals a high prevalence of knee OA among the Saudi population. It contributes to a better understanding of the modifiable and non-modifiable risk factors associated with symptomatic knee OA.
The study identifies non-modifiable risk factors (age, gender, and family history of OA) and modifiable risk factors (BMI, previous knee injury, smoking, physical activity level, and level of education) associated with knee OA. The information from this study is helpful for identifying people at risk for developing knee OA and targeting them with prevention plans such as weight-loss strategies and improvements in their physical activity levels. The findings of the current study can also help clinicians, policymakers, and stakeholders to target the associated modifiable risk factors explored in this study to decrease the burden and treatment cost of knee OA. The design of the current study (cross-sectional and using an online survey) may limit its generalisability, and therefore, longitudinal studies are needed.
# Multi-Parametric Cardiac Magnetic Resonance for Prediction of Heart Failure Death in Thalassemia Major ## Abstract We assessed the prognostic value of multiparametric cardiovascular magnetic resonance (CMR) in predicting death from heart failure (HF) in thalassemia major (TM). We considered 1398 white TM patients (30.8 ± 8.9 years, 725 women) without a history of HF at baseline CMR, which was performed within the Myocardial Iron Overload in Thalassemia (MIOT) network. Iron overload was quantified by using the T2* technique, and biventricular function was determined with cine images. Late gadolinium enhancement (LGE) images were acquired to detect replacement myocardial fibrosis. During a mean follow-up of 4.83 ± 2.05 years, $49.1\%$ of the patients changed the chelation regimen at least once; these patients were more likely to have significant myocardial iron overload (MIO) than patients who maintained the same regimen. Twelve ($1.0\%$) patients died from HF. Significant MIO, ventricular dysfunction, ventricular dilation, and replacement myocardial fibrosis were identified as significant univariate prognosticators. Based on the presence of the four CMR predictors of HF death, patients were divided into three subgroups. Patients having all four markers had a significantly higher risk of dying from HF than patients without markers (hazard ratio (HR) = 89.93; $95\%$ CI = 5.62–1439.46; $p = 0.001$) or with one to three CMR markers (HR = 12.69; $95\%$ CI = 1.60–100.36; $p = 0.016$). Our findings promote the exploitation of the multiparametric potential of CMR, including LGE, for better risk stratification for TM patients. ## 1. Introduction Beta thalassemia major (β-TM) is a genetic blood disease with a high incidence in the Mediterranean basin, the Middle East, the Indian subcontinent, Central Asia, and the Far East [1]. However, due to the increased migration flux, thalassemia has become a global health problem.
β-TM is characterized by a reduced or absent synthesis of the β chains of hemoglobin with a consequent excess of α chains which aggregate and precipitate in the red cells, leading to chronic hemolysis and the destruction of red cells and their precursors in the bone marrow or peripheral blood [2]. These abnormalities result in severe anemia, which needs lifelong regular blood transfusions. Due to the absence of a physiologic excretory pathway for excess iron, the major drawback of this treatment is iron overload, which, being highly cytotoxic, can cause organ dysfunction and damage [3]. Iron-induced heart failure (HF) remains the main cause of morbidity and mortality in TM patients, although the introduction of T2* Cardiovascular Magnetic Resonance (CMR) for the non-invasive assessment of myocardial iron overload (MIO) led to a significant increase in the survival rate [4,5]. Indeed, this technique offers the possibility to design tailor-made iron chelation therapies customized for each patient and to evaluate their efficacy [6,7,8,9]. In addition to direct myocardial injury, iron overload may affect the heart indirectly because hepatic dysfunction and endocrinopathies (diabetes mellitus, hypothyroidism, and hypoparathyroidism) arising from iron accumulation increase the risk for heart failure independently of cardiac iron status [10,11,12]. Nevertheless, the pathophysiology of heart failure in TM can be multifactorial with significant contributions from physiologic, immunoinflammatory, and genetic factors [13,14,15]. Thanks to its multiparametric potential, CMR represents a unique tool for the characterization and quantification of myocardial involvement and damage. CMR is the gold standard for the quantification of biventricular size and function by cine images. 
Because it does not incorporate ionizing radiation, does not exhibit window or geometric limitations, and provides precise ventricular endocardial definition, it allows for highly reproducible and accurate measurements of ventricular volumes. This is of particular value in TM, where the “normal” heart pumps at larger volumes and against lower peripheral resistances than the normal heart in non-thalassemic individuals, and where the heart’s biventricular size can be influenced by the iron previously accumulated. Moreover, late gadolinium enhancement (LGE) CMR is the only non-invasive imaging method that can detect replacement myocardial fibrosis, a common finding among TM patients [14,16,17]. A study of 481 Italian TM patients showed that heart iron, ventricular dysfunction, and replacement myocardial fibrosis could predict the future development of heart failure. Moreover, all three of these CMR markers remained independent prognosticators in a multivariate model that included a previous history of heart failure [11]. To the best of our knowledge, the association between CMR findings and HF death in TM patients has not yet been demonstrated. The aim of this multicenter study was to evaluate the prognostic value of multiparametric CMR (cardiac iron, function, and replacement fibrosis) in predicting death from heart failure in a large cohort of well-treated TM patients. ## 2.1. Study Population We considered 1485 TM patients (31.04 ± 8.88 years; 771 women) consecutively enrolled in the Myocardial Iron Overload in Thalassemia (MIOT) network, comprising 70 thalassemia centers and 10 magnetic resonance imaging (MRI) centers, where MRI exams were performed using homogeneous, standardized, and validated procedures [5,18]. 
The inclusion criteria of the MIOT project were: [1] male and female patients, of all ages, with thalassemia syndromes or structural hemoglobin variants, requiring MRI to quantify the cardiac and liver iron burden; [2] written informed consent; [3] written authorization for use and disclosure of protected health information; [4] no absolute contraindications to MRI. All patients were from an Italian background and were uniformly treated. They had been regularly transfused since early childhood and started undergoing chelation therapy from the mid-to-late 1970s, whereas patients born after the 1970s received chelation therapy from early childhood. All patients performed their first MRI scan between April 2006 and November 2015. All scans were performed in the week immediately prior to the scheduled blood transfusion. The clinical-anamnestic history of the patients, from birth to the date of the first MRI scan, was recorded in the MIOT web-based database. At every MRI follow-up, which was performed by protocol every 18 ± 3 months, the clinical, instrumental, and laboratory data were updated. All patients gave informed consent in compliance with the Declaration of Helsinki, and the study was approved by the institutional ethics committees of all MRI sites. ## 2.2. Magnetic Resonance Imaging All patients underwent MRI using the clinical 1.5 T scanners of three main vendors (GE Healthcare, Milwaukee, WI; Philips, Best, Netherlands; Siemens Healthineers, Erlangen, Germany) equipped with phased-array coils. Breath-holding in end-expiration and ECG-gating were used. For iron overload assessment, a validated T2* gradient–echo multiecho sequence was used. The intersite, interstudy, intraobserver, and interobserver variability of the proposed methodology had been previously assessed [19,20]. For the measurement of MIO, a multislice approach was adopted [21]. 
Three parallel short-axis views (basal, medium, and apical) of the left ventricle (LV) were acquired at 10 echo times (TE) (first TE = 2.0 ms, echo spacing = 2.26 ms) in a single end-expiratory breath-hold. Acquisition sequence details are provided in [22]. A medium hepatic slice was obtained at 10 TEs (echo spacing = 2.26 ms) in a single end-expiratory breath-hold [23]. T2* image analysis was performed by trained MRI operators (>10 years of experience) using custom-written, previously validated software (HIPPO MIOT®) [24]. The software provided the T2* value for all 16 segments of the LV, according to the standard American Heart Association (AHA)/American College of Cardiology (ACC) model [20]. The image analysis procedure included the manual delineation of the endocardial and epicardial borders of the LV wall, the identification of the upper intersection of the left and the right wall, and the automatic fitting of the signal decay over the TEs with an appropriate decay model. Susceptibility and geometric artifacts were corrected using an appropriate correction map [24]. The global heart T2* value was obtained by averaging all segmental values. Hepatic T2* values were calculated in a circular region of interest, defined in a homogeneous area of parenchyma without blood vessels [23], and were converted into liver iron concentration (LIC) with an appropriate calibration curve [25]. Steady-state free precession (SSFP) cines were acquired in sequential 8-mm short-axis slices (gap 0 mm) from the atrio-ventricular ring to the apex to assess biventricular function parameters quantitatively in a standard way [26]. Thirty cardiac phases were acquired per heartbeat, and 10–14 slices were required to cover the heart over its entire extension. The most apical slice included was the first slice which showed no blood pool at end-diastole. The most basal slice included was the one that showed a remaining part of the thick myocardium and was below the aortic valve.
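The segmental fitting of signal decay over the echo times can be illustrated with the mono-exponential model S(TE) = S0·exp(−TE/T2*). The sketch below estimates T2* by log-linear least squares on synthetic noiseless data; the actual decay model and artifact corrections used by HIPPO MIOT® are those described in [24], so this is only an assumed simplification:

```python
import math

def fit_t2star(tes_ms, signals):
    """Estimate T2* (ms) by log-linear least squares on the
    mono-exponential decay model S(TE) = S0 * exp(-TE / T2*)."""
    n = len(tes_ms)
    ys = [math.log(s) for s in signals]  # linearize: ln S = ln S0 - TE / T2*
    mean_x = sum(tes_ms) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in tes_ms)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(tes_ms, ys))
    slope = sxy / sxx  # slope = -1 / T2*
    return -1.0 / slope

# Ten echo times as described above: first TE = 2.0 ms, echo spacing = 2.26 ms.
tes = [2.0 + 2.26 * i for i in range(10)]
# Synthetic noiseless signal with a true T2* of 25 ms, i.e. above the
# conservative 20 ms cut-off for significant myocardial iron overload.
signals = [1000.0 * math.exp(-te / 25.0) for te in tes]
t2star = fit_t2star(tes, signals)
print(f"T2* = {t2star:.1f} ms; significant MIO: {t2star < 20}")
```

On real data, low signal at late echo times makes the log-linear fit noise-sensitive, which is one reason dedicated tools use more robust decay models and correction maps.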
The analysis was based on the manual recognition of the endocardial and epicardial borders of the wall, at least in the end-diastolic and end-systolic phases in each slice. Moreover, the papillary muscles were delineated and were considered myocardial mass rather than part of the blood pool. Biventricular volumes were indexed to the body surface area. The inter-center variability for the quantification of cardiac function was previously reported [27]. The left and right atrial areas were measured from the 4-chamber view projection in the ventricular end-systolic phase. Late gadolinium enhancement short-axis images were acquired 10–18 min after Gadobutrol (Gadovist®; Bayer Schering Pharma; Berlin, Germany) intravenous administration at the standard dose of 0.2 mmol/kg using a fast gradient-echo inversion recovery sequence. In addition, vertical, horizontal, and oblique long-axis views were acquired. Inversion times were adjusted to null the normal myocardium (from 210 ms to 300 ms). LGE was evaluated visually by two independent observers using a two-point scale (enhancement absent or present) and was considered present when visualized in two different views [14]. LGE images were not acquired in patients with a glomerular filtration rate < 30 mL/min/1.73 m2 and in patients who refused the contrast medium administration. ## 2.3. Diagnostic Criteria and Follow-Up A T2* measurement of 20 ms was taken as a “conservative” normal value for the segmental and global T2* values [28]. A LIC < 3 mg/g dry weight (dw) indicated no significant hepatic iron overload [29]. The mean serum ferritin level in the year preceding the MRI was taken into account, and a value ≥ 1000 ng/mL was considered indicative of significant body iron burden [30]. Previously derived reference ranges for biventricular volumes and function, specific to TM patients, were used [26]. 
Ventricular dilation was diagnosed in the presence of an LV and/or right ventricular (RV) end-diastolic volume index (EDVI) more than 2 standard deviations (SD) above the mean values normalized to age and gender. Ventricular dysfunction was diagnosed in the presence of an LV and/or RV ejection fraction (EF) more than 1 SD below the mean values normalized to age and gender. Atrial dilatation was diagnosed if the left and/or right atrial area indexed by body surface area was ≥15 cm2/m2 [31]. The endpoint used in this study was HF mortality. HF was identified based on symptoms (breathlessness, ankle swelling, and fatigue), signs, biomarkers, and instrumental parameters, according to the current guidelines [32]. The follow-up date coincided with the date of the last available MRI. For patients who did not perform a follow-up MRI, a case report form detailing patient outcomes between the baseline MRI and September 2018 was completed by the caring hematologist. ## 2.4. Statistical Analysis All data were analyzed using the SPSS version 27.0 (IBM Corp, Armonk, NY, USA) statistical package. Continuous variables were described as mean ± SD. Categorical variables were expressed as frequencies and percentages. The normality of the distribution of the continuous variables was assessed by using the Kolmogorov–Smirnov test. For continuous values with normal distributions, comparisons between groups were made by performing the independent-samples t-test (2 groups) or a one-way analysis of variance (ANOVA) (>2 groups). Wilcoxon’s signed rank test or the Kruskal–Wallis test was applied for continuous values with non-normal distribution. χ2 testing was performed for non-continuous variables. The Bonferroni post hoc test was used for multiple comparisons between pairs of groups. Correlation analysis was performed using Pearson’s test or Spearman’s test where appropriate. The Cox proportional hazards model was used to test the association between the considered prognostic variables and the outcome (HF death).
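The diagnostic cut-offs above can be folded into a simple per-patient evaluation of the four CMR markers considered in this study. In the sketch below, the EF and EDVI z-scores relative to the age- and sex-normalized TM reference ranges are assumed to be precomputed, and the function and field names are illustrative:

```python
def cmr_markers(global_t2star_ms, ef_z, edvi_z, lge_present):
    """Evaluate the four CMR markers on one patient.
    ef_z / edvi_z: z-scores of the worse ventricle (LV or RV) relative to
    the age- and gender-normalized TM reference ranges (assumed precomputed)."""
    markers = {
        "significant_MIO": global_t2star_ms < 20,  # global heart T2* < 20 ms
        "ventricular_dysfunction": ef_z < -1,      # EF more than 1 SD below the mean
        "ventricular_dilation": edvi_z > 2,        # EDVI more than 2 SD above the mean
        "replacement_fibrosis": bool(lge_present), # LGE seen in two different views
    }
    return markers, sum(markers.values())

# A hypothetical high-risk patient: iron-loaded, dysfunctional, dilated, LGE-positive.
markers, count = cmr_markers(global_t2star_ms=8.5, ef_z=-1.7, edvi_z=2.4, lge_present=True)
print(markers, count)  # all four markers present -> count == 4
```

The marker count is what drives the risk stratification reported in the Results, where patients carrying all four markers form the highest-risk subgroup.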
The results are presented as hazard ratios (HR) with $95\%$ confidence intervals (CI). Kaplan–Meier curves were generated by relating the development of an outcome over time to each significant prognosticator. The log-rank test was used to compare different strata in Kaplan–Meier analyses. In all tests, a 2-tailed $p \leq 0.05$ was considered statistically significant. ## 3.1. Selection of the Patients At the baseline MRI, eighty-seven ($5.9\%$) patients had a history of heart failure and were excluded from this study. Compared to HF-free patients, patients with a history of HF were characterized at the baseline MRI by a significantly higher age (34.33 ± 7.69 years vs. 30.84 ± 8.91 years; $p = 0.001$), significantly lower global heart T2* values (23.82 ± 13.38 ms vs. 29.46 ± 12.03 ms; $p \leq 0.0001$), a significantly higher number of segments with a T2* < 20 ms (7.33 ± 7.04 vs. 4.47 ± 6.09; $p \leq 0.0001$), significantly higher LV EDVI (94.37 ± 23.97 mL/m2 vs. 86.60 ± 18.71 mL/m2; $p = 0.005$) and RV EDVI (93.15 ± 37.33 mL/m2 vs. 82.59 ± 18.73 mL/m2; $p = 0.011$), and significantly lower LV EF (57.72 ± $10.41\%$ vs. 61.58 ± $7.09\%$; $p \leq 0.0001$) and RV EF (56.04 ± $10.17\%$ vs. 61.37 ± $8.14\%$; $p \leq 0.0001$). ## 3.2. Patients’ Characteristics Table 1 shows the demographic, clinical, and MRI features of the considered 1398 TM patients at the baseline MRI. The mean age was 30.8 ± 8.9 years, and 725 ($51.9\%$) patients were women. Bi-atrial areas were available for only 1138 patients due to technical reasons. The contrast medium was not administered in 286 ($20.5\%$) patients. Among the 187 ($16.8\%$) patients with replacement myocardial fibrosis, none had an ischemic pattern, and two or more foci were detected in $59.9\%$ of cases. The septum was involved in $80.6\%$ of the cases. Patients with replacement myocardial fibrosis were significantly older than patients without replacement myocardial fibrosis (33.25 ± 7.79 years vs.
30.89 ± 8.48 years; $p \leq 0.0001$), but they showed comparable global heart T2* values (27.56 ± 12.62 ms vs. 29.06 ± 11.94 ms; $p = 0.124$). At baseline, serum ferritin levels showed a significant positive correlation with MRI LIC values ($R = 0.713$; $p \leq 0.0001$) and a significant inverse correlation with global heart T2* values (R = −0.326; $p \leq 0.0001$). A significant inverse correlation was detected between global heart T2* and MRI LIC values (R = −0.303; $p \leq 0.0001$). Global heart T2* values were not correlated with biventricular volume indexes or LV cardiac indexes but showed a weak positive association with both LV EF ($R = 0.182$; $p \leq 0.0001$) and RV EF ($R = 0.102$; $p = 0.005$). The mean follow-up time was 4.83 ± 2.05 years (median: 5.01 years). ## 3.3. Chelation Therapy At the baseline MRI, patients received the following chelation regimens: deferoxamine ($33.3\%$), deferiprone ($19.0\%$), deferasirox ($25.0\%$), combined deferoxamine + deferiprone ($17.2\%$), sequential deferoxamine/deferiprone ($5.0\%$), and others ($0.5\%$). During the follow-up, $49.1\%$ of the patients changed their chelation regimen at least once, i.e., they switched to a different type of chelator or underwent modification of dose and/or frequency. Compared to patients who maintained the same regimen, those who changed the chelation regimen were more likely to have a baseline global heart T2* value < 20 ms ($33.2\%$ vs. $19.7\%$; $p \leq 0.0001$) and to have a baseline LIC ≥ 3 mg/g/dw ($69.3\%$ vs. $57.4\%$; $p \leq 0.0001$). The percentage of patients with good compliance (correspondence between the time history of drug administration and the prescribed regimen > $60\%$) was significantly higher at the end of the study than at the baseline MRI ($94.6\%$ vs. $92.5\%$; $p \leq 0.0001$). ## 3.4. Patient Outcomes Twelve ($1.0\%$) patients died from heart failure. Ten patients had HF with reduced EF at echocardiography.
The majority of them presented to the healthcare provider with a reduction in their effort tolerance due to dyspnea and/or fatigue. One patient presented not only with fatigue but also with chest pain and tachycardia and had elevated troponin levels. Two patients presented with palpitations. Two patients had chronic heart failure diagnosed >1 year after the CMR scan that rapidly worsened. One patient had HF with mildly reduced EF. One patient had HF with preserved EF and evidence of structural heart disease. The mean age at death was 35.06 ± 8.68 years (range: 17–47 years). The mean time from the baseline MRI to HF-related death was 1.68 ± 1.78 years, and six ($50.0\%$) deaths occurred within the first year of follow-up. When compared to the other patients, patients who died from HF showed, at the baseline MRI, significantly higher serum ferritin levels and MRI LIC values, significantly lower global heart T2* values, a significantly higher number of segments with T2* < 20 ms, significantly lower biventricular EFs, and a significantly higher incidence of replacement myocardial fibrosis (Table 1). One patient who died from HF had a previous history of myocarditis. ## 3.5. Prediction of Heart Failure Mortality Table 2 shows the results of the univariate Cox regression analysis. No association was detected between age or gender and HF mortality. Significant MIO (global heart T2* < 20 ms), ventricular dysfunction, ventricular dilation, and replacement myocardial fibrosis were identified as significant univariate prognosticators. Figure 1 shows the Kaplan–Meier survival curves. The log-rank test revealed a significant difference in the curves for each prognosticator (significant MIO: $p = 0.010$, ventricular dysfunction: $p = 0.030$, ventricular dilation: $p < 0.0001$, and replacement myocardial fibrosis: $p = 0.010$). Due to the low number of deaths from HF, it was not possible to perform a multivariate model.
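The Kaplan–Meier estimation underlying the curves above can be illustrated with a minimal, self-contained sketch. The data below are invented for illustration only; in practice a library such as lifelines (Python) or R's survival package would be used, with the log-rank test then comparing strata.

```python
from collections import Counter

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times  : follow-up time for each patient (e.g., years)
    events : 1 if the outcome (e.g., HF death) occurred, 0 if censored
    Returns a list of (time, survival probability) at each event time.
    """
    deaths = Counter(t for t, e in zip(times, events) if e == 1)
    at_risk = len(times)
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        d = deaths.get(t, 0)
        if d:
            surv *= 1 - d / at_risk          # step down at each event time
            curve.append((t, surv))
        at_risk -= sum(1 for x in times if x == t)  # drop events and censorings
    return curve

# Five hypothetical patients followed 1-4 years, with censoring at years 2 and 4:
# kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0]) steps down at t = 1, 2, 3.
```

Comparing two strata (e.g., with vs. without a CMR marker) would mean computing one such curve per stratum and applying the log-rank test to the event tables.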
However, based on the presence of the four CMR prognosticators of HF death, patients were divided into three subgroups: (1) patients with none of the four CMR markers (group 0; n = 488); (2) patients with one to three CMR markers (group 1; n = 617); and (3) patients with all four CMR markers (group 2; n = 7). Table 3 shows the comparison of the baseline data among the three groups. No difference in terms of age or age at the start of regular transfusions or chelation was detected. All patients with four CMR markers were male, whereas the distribution by sex was homogeneous in the other two groups. Serum ferritin levels and MRI LIC values were significantly higher in group 1 than in group 0 ($p < 0.0001$ in both comparisons). Global heart T2* values were significantly lower in group 2 than in groups 0 and 1 ($p < 0.0001$ and $p = 0.006$, respectively) and in group 1 than in group 0 ($p < 0.0001$), whereas the number of segments with a T2* < 20 ms was significantly higher in group 2 than in groups 0 and 1 ($p < 0.0001$ and $p = 0.018$, respectively) and in group 1 than in group 0 ($p < 0.0001$). Significantly lower LV EF and RV EF values were found in group 2 compared to both group 1 ($p < 0.0001$ for both ventricles) and group 0 ($p < 0.0001$ for both ventricles) and in group 1 compared to group 0 ($p < 0.0001$ for both ventricles). LV EDVI and RV EDVI were significantly increased in group 2 compared to group 1 ($p < 0.0001$ for both ventricles) and to group 0 ($p < 0.0001$ for both ventricles) and in group 1 compared to group 0 ($p < 0.0001$ and $p = 0.003$, respectively). The frequency of replacement myocardial fibrosis was significantly higher in group 2 than in groups 1 and 0 and in group 1 than in group 0 ($p < 0.0001$ for all comparisons). The frequency of HF death was significantly higher in group 2 than in both group 0 ($14.3\%$ vs. $0.2\%$; $p < 0.0001$) and group 1 ($14.3\%$ vs.
$1.5\%$; $p = 0.021$) (Figure 2). Patients with all four markers had a significantly higher risk of dying from HF than patients without markers (HR = 89.93; $95\%$ CI = 5.62–1439.46; $p = 0.001$) or with one to three CMR markers (HR = 12.69; $95\%$ CI = 1.60–100.36; $p = 0.016$). Figure 3 shows the Kaplan–Meier survival curve. The log-rank test revealed a significant difference in the curves ($p < 0.0001$). ## 4. Discussion In our homogeneous white Italian/Mediterranean population, which had been well treated since early childhood and followed for a mean of 4.8 years after the baseline MRI, we detected a low incidence of deaths from heart failure because the T2* report guided the patient-specific adjustment of the chelation regimen. Indeed, the patients who changed their chelation regimen (drug or frequency/dosage) during the follow-up were more likely to have significant MIO at baseline. Moreover, all MRI scans were performed after 2006, the year when a new era of chelation treatment started thanks to the availability in the clinical arena of three different iron chelators and the evidence that they could be used in association to intensify chelation or make it more tolerable [33]. No prospective association was detected between hepatic iron or serum ferritin levels and HF mortality. These parameters cannot be used to infer cardiac iron status, as demonstrated by the weak cross-sectional correlation with cardiac T2* found in the present study and in other published studies [28,34,35,36]. The relationship between cardiac and hepatic iron is complex due to the differences in iron uptake and elimination between the two organs as well as the strong influence of both the type and pattern of chelation [6,9,37]. Cardiac T2* can identify preclinical cardiac iron deposition in patients with excellent control of total body iron stores [35,38,39,40]. As expected, MIO was a significant prognosticator of HF death.
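The three-group stratification and the hazard-ratio arithmetic it feeds into can be sketched as follows. The Cox coefficient and standard error below are back-solved from the reported HR of 89.93 ($95\%$ CI 5.62–1439.46) purely for illustration; they are not taken from the study's model output.

```python
import math

def cmr_risk_group(mio, dysfunction, dilation, fibrosis):
    """Risk group from the four binary CMR markers:
    0 markers -> group 0, 1-3 markers -> group 1, all 4 -> group 2."""
    n = sum(map(bool, (mio, dysfunction, dilation, fibrosis)))
    return 0 if n == 0 else (2 if n == 4 else 1)

def hazard_ratio_ci(beta, se, z=1.96):
    """Hazard ratio and 95% CI from a Cox log-hazard coefficient and its SE:
    HR = exp(beta), CI = exp(beta -/+ z * se)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# beta = ln(89.93) ~ 4.499 and se ~ 1.4148 recover the reported interval
hr, lo, hi = hazard_ratio_ci(4.499, 1.4148)
```

The wide confidence interval reflects the large standard error that comes with only twelve HF deaths, which is also why a full multivariate model was not feasible.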
Excess iron can be detrimental to human cells through the production of hydroxyl radicals via Haber–Weiss–Fenton reactions, which cause oxidative damage to cellular components such as lipids, proteins, and DNA [41,42]. Free iron can directly interact and interfere with a variety of ion channels of cardiomyocytes, including the L-type calcium channel, the ryanodine-sensitive calcium channel, the voltage-gated sodium channel, and the delayed rectifier potassium channel, making cardiomyocytes particularly vulnerable to iron overload. Excessive production of reactive oxygen species can also directly induce ferroptosis (a non-apoptotic mode of cell death) in cardiomyocytes by catalyzing the oxidation of phospholipids in the cell membrane [43]. Importantly, other CMR parameters, namely ventricular dilatation, ventricular dysfunction, and replacement myocardial fibrosis, also emerged as determinants of unfavorable prognosis. Our findings are in line with the study by Pepe et al., where, in a multivariate model, replacement myocardial fibrosis, MIO, and ventricular dysfunction independently predicted non-fatal HF [11]. Initially, MIO may cause a reduction of ventricular dimensions through vascular and ventricular stiffening [44], but ventricular systolic function can remain well preserved, so that at the onset of the disease patients are generally asymptomatic. In end-stage disease, MIO may increase ventricular dimensions and decrease systolic function [45]. In the Italian TM population, replacement myocardial fibrosis was demonstrated to be a relatively common finding (~$20\%$) [14,46,47], correlated with aging, negative cardiac remodeling, hepatitis C virus (HCV) infection, and diabetes mellitus in adult TM patients [10,14], and with lower cardiac T2* values in pediatric patients free of complications [40].
Moreover, a recent study showed an association between replacement fibrosis and decreased native T1 values measured by CMR [26], suggesting a potential pathophysiological role of MIO in the development of myocardial fibrosis. Indeed, native T1 mapping seems to have a higher sensitivity for low amounts of iron in comparison to the T2* technique. Although iron can be removed via chelation treatment, the induced heart damage may not be totally reversible. The findings of the present study further highlight the prognostic implications of replacement myocardial fibrosis. In fact, in different pathologies, such as dilated cardiomyopathy, hypertrophic cardiomyopathy, aortic stenosis, and infiltrative diseases, replacement myocardial fibrosis represents a final common pathway of myocardial disease and is independently associated with cardiac and all-cause mortality [48]. Importantly, when the four CMR indices (cardiac iron, dilatation, dysfunction, and replacement fibrosis) were evaluated in combination, they fine-tuned the prognostic stratification of TM patients. Thus, the results of our study strengthen the usefulness of a multiparametric CMR approach that integrates biventricular ejection fractions and volumes and LGE with cardiac T2* to further improve the prognosis of TM patients via the early identification of high-risk patients. Conversely, relying on cardiac T2* as the sole marker of cardiac death may lead to suboptimal prognostic stratification. It deserves mention that in our study, both ventricular dysfunction and dilation were diagnosed using previously derived “normal for TM” reference ranges in order to avoid a misdiagnosis of cardiomyopathy (underdiagnosis of dysfunction and overdiagnosis of dilatation) [26]. Indeed, despite transfusion therapy, TM represents a chronically anemic condition characterized by an elevation of blood volume (increased preload) and a decrease in systemic vascular resistance (decreased afterload) [49].
Both conditions enhance ventricular pump performance, and the anatomical–functional expression of this hemodynamic state is the enlargement of cardiac cavities [50]. ## Limitations This study suffers from several limitations. The small number of HF deaths that occurred during the follow-up did not allow us to perform a multivariate analysis including all variables identified in the univariate analysis. For this reason, we built a model based on the four CMR univariate prognosticators. The prognostic value of the CMR mapping techniques (T1, T2, and extracellular volume) was not evaluated because they were not available at the time of patient enrolment. We did not measure myocardial deformation (strain), which could be a more sensitive marker of myocardial dysfunction than EF [51]. Although feature-tracking (FT) CMR allows quantification of myocardial deformation on routine SSFP cine images, the dedicated post-processing FT software packages were not available in the MIOT centers. More studies are needed to evaluate the transferability of our results to other TM populations with a lower prevalence of HCV infection, in which a lower frequency of myocardial fibrosis may be expected. ## 5. Conclusions In TM patients, significant MIO, ventricular dysfunction, ventricular dilation, and replacement myocardial fibrosis were associated with a significantly higher risk of heart failure death, and the combined use of all four CMR indexes provided incremental prognostic information. Hence, the present study’s findings promote exploiting the multiparametric potential of CMR, including LGE, for better risk stratification of TM patients. Further studies are needed to verify whether, in addition to the adjustment of iron chelation therapy, the adoption of treatment directed at myocardial performance may further improve the prognosis of TM patients.
# Real-World Analysis on the Characteristics, Therapeutic Paths and Economic Burden for Patients Treated for Glaucoma in Italy ## Abstract This real-world analysis was performed on administrative databases to evaluate the characteristics, therapies, and related economic burden of glaucoma in Italy. Adults with at least 1 prescription for ophthalmic drops (ATC class S01E: antiglaucoma preparations, miotics) during the data availability period (January 2010−June 2021) were screened, then patients with glaucoma were included. The first date of ophthalmic drops prescription was the index date. Included patients had at least 12 months of data availability before and after the index date. Overall, 18,161 glaucoma-treated patients were identified. The most frequent comorbidities were hypertension ($60.2\%$), dyslipidemia ($29.7\%$) and diabetes ($17\%$). During the available period, $70\%$ (n = 12,754) had a second-line therapy and $57\%$ (n = 10,394) a third-line therapy, predominantly ophthalmic drugs. As first line, besides the $96.3\%$ of patients with ophthalmic drops, a small proportion underwent trabeculectomy ($3.5\%$) or trabeculoplasty ($0.4\%$). Adherence to ophthalmic drops was found in $58.3\%$ of patients, and therapy persistence reached $78.1\%$. The mean total annual cost per patient was €1,725, mostly due to all-cause drug expenditure (€800), all-cause hospitalizations (€567) and outpatient services (€359). In conclusion, glaucoma-treated patients were mostly on ophthalmic medications in monotherapy, with unsatisfactory adherence and persistence (<$80\%$). Drug expenditure was the largest item among healthcare costs. These real-life data suggest that further efforts are needed to optimize glaucoma management. ## 1. Introduction Vision impairment represents an important public health issue, and its burden is likely to increase in the future because of the ageing of the global population [1].
Glaucoma is an age-related chronic optic neuropathy and is among the main causes of vision loss [2]. The characteristic progressive damage of the optic nerve leads to an irreversible, although preventable, visual field loss [3]. Generally, symptoms are almost absent at early stages and arise at late stages as problems related to permanent visual loss [4]. To date, the only controllable factor to prevent or delay the progressive course of glaucoma is elevated intraocular pressure (IOP), even though studies suggest that other modifiable risk factors could include socioeconomic status, dietary intake, poor exercise, or sleep apnea [5]. The latest estimates indicate approximately 60 million individuals with glaucoma worldwide, and around 8 million in Europe, with a prevalence of $2.5\%$ [6]. In Italy, around 550,000 individuals are estimated to have received a diagnosis of glaucoma [7]. Potential blindness, as well as irreversible vision impairment, has a detrimental impact on the quality of life of glaucoma patients, which has been reported to decrease in parallel with increasing glaucoma severity [8,9]. Glaucoma is often underdiagnosed, or diagnosis occurs at a later stage [10]. Antiglaucoma treatments aim to reduce and prevent further damage to the optic nerve and to preserve the residual visual capacity [7]. The most recent European Glaucoma Society (EGS) Guidelines advise that the strategy proven to be effective focuses on lowering IOP. Available treatments are medication, laser, or surgery [7,10]. Pharmacological treatment, i.e., topical ophthalmic drops in monotherapy, is considered first-line therapy [11]. In case of lack of efficacy or intolerance, switching to a second drug in monotherapy or combination is advised.
Among second-line options, trabeculoplasty may also be considered, and the most recent guidelines recommend trabeculoplasty as an option for initial treatment in mild or moderate phases of open-angle glaucoma [11]. For patients using medication, ensuring an optimal level of adherence and persistence to treatment is essential to reduce the risk of disease progression [12,13]. Indeed, good adherence and persistence are key to obtaining the beneficial effect of glaucoma therapy, by lowering IOP to prevent vision loss. Sub-optimal levels of adherence and persistence represent risk factors for disease progression, and it has been described in the literature that patients with stable visual fields were more than $75\%$ adherent to their therapy, while patients with a worsening of their condition were less than $45\%$ adherent [14]. A large body of evidence shows poor adherence to the prescribed topical drops for glaucoma treatment when compared to medication adherence for other systemic chronic conditions [13,15,16]. Several methodologies for adherence evaluation have been reported, including self-report, pharmacy refill reports, electronic monitoring, and direct observation [15], but to date there is no clear consensus on which method correlates best with clinical outcomes. Moreover, instrument scoring systems have been introduced and have been shown to predict actual glaucoma medication adherence [17]. To date, little evidence is available on the drug utilization, characteristics, and economic burden of patients with glaucoma in Italy. Hence, this analysis aims to evaluate the characteristics of patients with glaucoma, to describe their diagnostic and therapeutic paths, to assess the drug utilization of ophthalmic drops used by these patients, and to analyze the healthcare resource use and related direct costs for the Italian National Health Service (INHS) in clinical practice in Italy. ## 2.1.
Data Source This is a retrospective observational analysis that used data collected from the administrative health databases of Italian Local Health Units (LHUs) from the Puglia, Campania, Umbria, Lazio and Veneto Regions, covering around 2.7 million health-assisted subjects. Such databases store information on all healthcare resources reimbursed by the INHS. The databases used to perform the analysis were: the demographic database (for data on age and sex), the pharmaceutical database (with data related to dispensed drugs, such as Anatomical–Therapeutic Chemical [ATC] code, number of packages, number of units per package, costs and prescription date), the hospitalization database (including discharge diagnosis codes classified according to the International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM], Diagnosis Related Group [DRG] and DRG-related charge), the outpatient specialist services database (containing data on the type and description of diagnostic tests and specialist visits for the patients in the analysis) and the payment exemption database (containing date and type of exemption). An anonymous univocal patient ID was assigned by the LHUs to each health-assisted subject to ensure patient privacy. This ID allowed us to perform the electronic linkage between the databases. The anonymization process was in full compliance with EU Data Privacy Regulation 2016/679 (“GDPR”) and Italian D.lgs. n. 196/2003, as amended by D.lgs. n. 101/2018. Aggregated results are reported herein, so that they cannot be traced back to individual patients. ## 2.2. Patient Population All records of adult patients (≥18 years old) with at least 1 prescription for ophthalmic drops belonging to ATC class S01E (antiglaucoma preparations and miotics) during the whole data availability period, which spanned from January 2010 to June 2021, were screened for inclusion.
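The deterministic linkage across databases via the anonymous patient ID amounts to a keyed join. A minimal sketch, with hypothetical field names chosen only for illustration, might look like this:

```python
def link_records(demographics, prescriptions):
    """Join demographic rows and pharmacy rows on the anonymous patient ID.

    demographics : {patient_id: {"age": ..., "sex": ...}}
    prescriptions: list of {"patient_id": ..., "atc": ..., "date": ...}
    Returns {patient_id: {"age", "sex", "prescriptions": [...]}} for linkable IDs.
    """
    linked = {pid: dict(info, prescriptions=[]) for pid, info in demographics.items()}
    for rx in prescriptions:
        if rx["patient_id"] in linked:
            linked[rx["patient_id"]]["prescriptions"].append(rx)
    return linked
```

In the real setting this join spans all five databases (demographic, pharmaceutical, hospitalization, outpatient, exemption), always keyed on the same anonymized ID.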
Among them, patients with glaucoma were detected during the inclusion period January 2011−June 2020 by the presence of at least one of the following criteria (not necessarily after the ophthalmic drops prescription): (i) a hospitalization discharge diagnosis for glaucoma (ICD-9-CM: 365); (ii) an active exemption code for glaucoma (code 019); (iii) a procedure for trabeculectomy (codes 12.64 or 12.54) or trabeculoplasty (code 12.59) (as a proxy of diagnosis). The index date was the date of the first ophthalmic drops prescription. All patients included in the analysis had at least 12 months of available data before and after the index date, while those with missing data were excluded. Follow-up went from the index date to the end of the data availability period or death, whichever occurred first. ## 2.3. Baseline Patient Characteristics At the index date, data on age and sex were analyzed. The presence of comorbidities was investigated in the year prior to the index date by evaluating the Charlson Comorbidity Index [18], which gives a score based on the presence of specific comorbidities identified by hospitalization discharge diagnosis and/or drug treatment (therefore, untreated/non-hospitalized comorbidities are not captured).
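The three case-finding criteria combine as a simple logical OR over each patient's records. A sketch, assuming a hypothetical per-patient record layout (the codes themselves are the ones listed above):

```python
GLAUCOMA_ICD9_PREFIX = "365"    # hospitalization discharge diagnosis for glaucoma
GLAUCOMA_EXEMPTION = "019"      # active exemption code for glaucoma
GLAUCOMA_PROCEDURES = {"12.64", "12.54", "12.59"}  # trabeculectomy / trabeculoplasty

def has_glaucoma_evidence(record):
    """True if the patient's record meets at least one inclusion criterion.

    record: dict with lists under "diagnoses", "exemptions", "procedures",
    as pulled from the linked administrative databases (hypothetical layout).
    """
    if any(dx.startswith(GLAUCOMA_ICD9_PREFIX) for dx in record.get("diagnoses", [])):
        return True
    if GLAUCOMA_EXEMPTION in record.get("exemptions", []):
        return True
    return bool(GLAUCOMA_PROCEDURES & set(record.get("procedures", [])))
```

Matching the ICD-9-CM code by prefix captures the whole 365.x family of glaucoma diagnoses under the three-digit category.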
Moreover, the proportion of patients affected by the following conditions was reported: hypertension (at least one antihypertensive drug prescription, ATC codes: C02, C03, C07, C08, C09), dyslipidemia (at least one lipid-modifying agent prescription, ATC code: C10); diabetes (at least one antidiabetic drug prescription, ATC code A10); cataract (ICD-9-CM code 366 or procedure codes 13.2, 13.3, 13.4, 13.6, 13.71); blindness (ICD-9-CM code 369 or exemption code C05); retinal/choroid disorders (ICD-9-CM codes 361, 362, 363); diabetic retinopathy (DR) (ICD-9-CM code 362.0); wet age-related macular degeneration (wAMD) (ICD-9-CM code 362.52); retinal vein occlusion (RVO) (ICD-9-CM code 362.3); Parkinson’s disease (ICD-9-CM code 332 or exemption code 038); Alzheimer’s disease (ICD-9-CM code 331.0 or exemption code 029); rheumatoid arthritis (ICD-9-CM code 714.0 or exemption code 006). Since comorbidities were identified based on hospitalization/treatment reimbursed by the INHS, they could have been underestimated. Follow-up. Treatment line identification was performed over the whole analysis period. The number of lines was identified by the presence of ophthalmic drops alone or in combination. Switching from one ophthalmic agent to another was defined as a change of line. Trabeculectomy and trabeculoplasty were considered distinct treatment lines. Drug utilization was assessed by evaluating the persistence, adherence, and discontinuation of ophthalmic drops. Specifically, persistence was defined as the presence of any ophthalmic drop prescription during the last quarter of the 12-month follow-up. Discontinuation was identified as the absence of ophthalmic drops prescriptions during the last quarter of the 12-month follow-up period (interruption) or switching to another ophthalmic drops treatment (switch).
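The persistence rule just defined, together with the PDC-based adherence classes used in the drug-utilization analysis, can be sketched as follows. Day offsets from the index date are a simplified assumption for illustration, not the study's actual implementation.

```python
def is_persistent(prescription_days, followup_days=365):
    """Persistent if any ophthalmic drop prescription falls in the last
    quarter of the 12-month follow-up (days are offsets from the index date)."""
    last_quarter_start = followup_days * 3 // 4  # day 273 of 365
    return any(last_quarter_start <= d <= followup_days for d in prescription_days)

def pdc(days_supplied, observed_days=365):
    """Proportion of days covered (PDC), as a percentage:
    days of medication supplied over the observed time."""
    return 100.0 * days_supplied / observed_days

def adherence_class(pdc_pct):
    """Adherent (PDC >= 80%), partially adherent (40% <= PDC < 80%),
    poorly adherent (PDC < 40%)."""
    if pdc_pct >= 80:
        return "adherent"
    if pdc_pct >= 40:
        return "partially adherent"
    return "poorly adherent"
```

For example, a patient supplied for 300 of 365 observed days has a PDC of about 82% and is classed as adherent, whereas 150 covered days (about 41%) falls in the partially adherent band.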
Adherence to ophthalmic drops treatment was calculated over the first 12 months of follow-up using the proportion of days covered (PDC), i.e., the ratio between the number of days of medication supplied and the observed time. Patients were classified as adherent (PDC ≥ $80\%$), partially adherent ($40\%$ ≤ PDC < $80\%$) and poorly adherent (PDC < $40\%$) [13]. Adherence was calculated based on prescriptions, and the actual use made by the patient is unknown. ## 2.4. Healthcare Resource Consumption and Costs The analyses on healthcare resource consumption and costs were performed over the first year of follow-up on patients who were alive. Healthcare resource consumption was reported as the annual mean (and standard deviation, SD) number of all drug prescriptions, all-cause hospitalizations, and all outpatient services per patient. Direct medical costs related to the healthcare resource consumption described above were reported in Euros (€) as the annual mean cost per patient with SD. Drug costs were evaluated based on the INHS purchase price. Hospitalization costs were determined using DRG tariffs, which represent the reimbursement levels of the INHS to healthcare providers. Healthcare costs related to specialist visits and diagnostic services were defined according to the tariffs of each region (called Nomenclatore tariffario regionale). ## 2.5. Statistical Analysis All analyses were descriptive. Categorical variables were reported as numbers and percentages, continuous variables as means with SD. Patients with cost values exceeding the mean by more than three times the SD were excluded from the cost analysis. Following the “Opinion 05/2014 on Anonymization Techniques” drafted by the “European Commission Article 29 Working Party”, analyses involving ≤3 patients were not reported (NR) for data privacy, as they were potentially traceable to single individuals. All analyses were performed using STATA SE version 12.0. ## 3.
Results From a sample population of around 2.7 million health-assisted subjects, 105,948 users of ophthalmic drops were identified, and among them, 18,161 patients had evidence of glaucoma based on the criteria applied and were therefore included (Figure 1). Characteristics are reported in Table 1: $44\%$ of patients were male, and the mean age was 67 years. The most populated age ranges were 65−74 years ($28.1\%$), 75−84 years ($23.6\%$) and 55−64 years ($20.4\%$). The mean Charlson Index was 0.9, with around $22\%$ of patients showing a score ≥2, indicating a moderate-to-severe comorbidity profile. Hypertension was the most frequently detected comorbidity ($60.2\%$), followed by dyslipidemia ($29.7\%$) and diabetes ($17\%$). Regarding eye-related diseases, cataract was observed in $8.9\%$ of patients, while $0.7\%$ were blind. At the index date, $30.3\%$ of patients received prostaglandin analogues, $30\%$ a fixed combination, mostly timolol-based, while $25.7\%$ received beta-blocking agents, $10.8\%$ carbonic anhydrase inhibitors and $3\%$ sympathomimetics (Figure 2). During the whole period analyzed, $11.5\%$ of patients had a trabeculectomy and $2\%$ a trabeculoplasty. Patients who underwent trabeculoplasty were older than those treated with drops only and showed a higher comorbidity burden (Table 1). Of all the patients included (n = 18,161), considering the whole available period, $70\%$ (n = 12,754) had a second line of therapy and $57\%$ (n = 10,394) a third line. Lines of therapy were mainly represented by ophthalmic drugs, and the therapeutic sequences are reported in Table 2. As first line, $96.3\%$ of patients had ophthalmic drops, while only a small proportion underwent trabeculectomy ($3.5\%$) or trabeculoplasty ($0.4\%$). The majority of patients ($66\%$) with ophthalmic drops as first line switched to another ophthalmic therapy, while $2.8\%$ had a trabeculectomy procedure and $0.4\%$ a trabeculoplasty.
All patients with trabeculectomy or trabeculoplasty as first line had ophthalmic drops as second line, while as third line a second procedure was found in $11.6\%$ and $13\%$ of patients, respectively (Table 2). Regarding drug utilization during the first year of follow-up, $58.3\%$ of patients were adherent to ophthalmic drops, $25.6\%$ partially adherent and $16.1\%$ poorly adherent (Table 3). Persistence to ophthalmic medication was observed in $78.1\%$ of patients, while the remaining $21.9\%$ interrupted the therapy. Around $42\%$ of patients switched from the index ophthalmic drug during the first year of follow-up. The analysis of mean annual resource consumption and costs during the first year of follow-up revealed a mean annual number of 17 prescriptions, 6.4 outpatient specialist services, and a mean of 0.3 all-cause hospitalizations. The mean total annual direct cost per patient was €1,725, related mostly to all-cause drug expenditure (€800), followed by all-cause hospitalizations (€567) and outpatient services (€359) (Figure 3). ## 4. Discussion This analysis of real-world data provided insights into the characteristics of glaucoma patients, their therapeutic paths, and the healthcare resource use and related direct costs for the INHS in Italian clinical practice. Among 2.7 million health-assisted individuals, almost 18,000 glaucoma patients under ophthalmic drops were included in the analysis, with a prevalence of $0.67\%$. In Europe, glaucoma prevalence is almost $2.5\%$ [6]; in Italy, 550,000 individuals are estimated to have received a diagnosis of glaucoma [7], and prevalence rates of $2.51\%$ for primary open-angle glaucoma, $0.97\%$ for primary angle-closure glaucoma and $0.29\%$ for secondary glaucoma have been estimated [19]. The discrepancy between our data and published reports is plausibly attributable to the fact that in the present analysis glaucoma patients were identified by treatment prescription and not by a direct diagnosis.
The analysis of patients’ characteristics revealed a mean age of 66 years, with almost $60\%$ being female; these data are in line with other real-world studies reporting the same mean age and a slight female predominance [6]. The comorbidity profile of these patients showed a high frequency of hypertension, and of dyslipidemia in almost 20−$30\%$ of patients; data from an observational Italian study reported that the most frequent (self-reported) comorbidities were systemic hypertension ($53.2\%$) and hyperlipidemia ($26.2\%$), similar to our findings [9]. All these comorbidities indicate a tendency toward polypharmacy for these patients, suggesting that attention should be paid to avoiding drug–drug interactions in patients prescribed multiple drugs and that individualized management integrating anti-glaucoma agents into the overall treatment plan should be considered [20]. In the present analysis, all glaucoma patients under ophthalmic drops were included, being by definition in first-line treatment. Most of the patients ($86.7\%$) were under ophthalmic drops as monotherapy, as per guidelines [10]. It has been extensively reported that adherence to glaucoma medication can be a challenging problem [12]. Adherence to ophthalmic medication is poor, and multiple factors have been identified, including more frequent and complex dosing, as well as patient-centered factors, such as poor disease or health consciousness and a passive learning style [21]. Medication adherence plays an essential role alongside several factors such as clinical benefit, economic burden and quality of life of a patient [18,22,23]. In our analysis, we found that $58.3\%$ of patients were adherent, $25.6\%$ partially adherent and $16.1\%$ poorly adherent. The latter value is within the rates of nonadherence to glaucoma medications found in the literature, which span from $16\%$ to $30\%$ [24].
It should be underlined that our analysis was limited to one year of observation; given that glaucoma is a chronic condition requiring life-long treatment, studies with longer follow-up have shown that therapy adherence tends to decrease further over the years [23]. Similarly, persistence ranged from $69\%$ to $84\%$ according to European studies [24,25]. Proper drug utilization, namely optimization of adherence and persistence to treatment, may decrease the healthcare burden of patients. The analysis of healthcare resource consumption and costs showed that medication expenditures were the main driving force, accounting for $46\%$ of total costs. In other European countries, treatment costs for patients with glaucoma have been reported to range between $42\%$ and $56\%$ of the total direct cost for patients in all stages of glaucoma [23]. Moreover, it has been reported that the economic burden of glaucoma increases with disease severity. An analysis performed in Europe showed an increase of around €86 in total cost for each progression in glaucoma stage, from €455 (stage 0) to €969 (stage 4) per person-year [26]. The present analysis has some limitations related to its observational and retrospective design and to the data source. Indeed, administrative databases are primarily intended for administrative purposes, even if their utilization for healthcare research has been increasing over the years. Some limitations are related to the lack of clinical data within the databases; therefore, it was not possible to retrieve information on glaucoma status, severity level, or type. Furthermore, the identification of patients was made by the presence of ophthalmic drugs; therefore, untreated patients were not captured. Comorbidities were observed during the whole data availability period before inclusion; therefore, variations and incomplete capture of these variables may have occurred among patients.
Drug utilization is based on drug dispensations; therefore, the reasons behind the choice of therapy or a switch are not collected. Minimally invasive glaucoma surgery (MIGS) was not identified since, to date, no reimbursement code for MIGS is available in Italy [7]. ## 5. Conclusions This real-world analysis depicted the characteristics, therapeutic paths and economic burden of glaucoma patients under ophthalmic drops in Italy by means of administrative data. The vast majority of treated patients were under ophthalmic medication in monotherapy. Drug utilization analyses revealed poor adherence and persistence below $80\%$. Results were consistent with the literature, while the low prevalence reported could be explained by the methodology applied, since the analysis focused on glaucoma patients in treatment. Patients’ management was associated with healthcare resource consumption and costs mostly related to drug prescriptions. Although this result could depend on the fact that all patients were treated, this trend adds to the growing body of knowledge that treatments are a major cost driver for glaucoma patients. Overall, these real-life data suggest that strategies to optimize glaucoma management should focus on ensuring proper drug utilization; efforts to increase adherence and persistence to ophthalmic medication have been widely reported to enhance the likelihood of benefiting from therapy. Furthermore, we have shown a complex therapeutic pattern for these patients, who move through multiple lines of therapy and, in addition, display a comorbidity profile requiring a polytherapy regimen with risk of drug–drug interactions, suggesting an unmet therapeutic need that should be taken into account in the development of new treatments/techniques for glaucoma.
# Methylglyoxal-Modified Albumin Effects on Endothelial Arginase Enzyme and Vascular Function ## Abstract Advanced glycation end products (AGEs) contribute significantly to vascular dysfunction (VD) in diabetes. Decreased nitric oxide (NO) is a hallmark of VD. In endothelial cells, NO is produced by endothelial NO synthase (eNOS) from L-arginine. Arginase competes with NOS for L-arginine to produce urea and ornithine, limiting NO production. Arginase upregulation has been reported in hyperglycemia; however, the role of AGEs in arginase regulation is unknown. Here, we investigated the effects of methylglyoxal-modified albumin (MGA) on arginase activity and protein expression in mouse aortic endothelial cells (MAEC) and on vascular function in mouse aortas. Exposure of MAEC to MGA increased arginase activity, which was abrogated by a MEK/ERK1/2 inhibitor, a p38 MAPK inhibitor, and ABH (an arginase inhibitor). Immunodetection of arginase revealed an MGA-induced increase in arginase I protein expression. In aortic rings, MGA pretreatment impaired acetylcholine (ACh)-induced vasorelaxation, which was reversed by ABH. Intracellular NO detection by DAF-2DA revealed blunted ACh-induced NO production with MGA treatment that was reversed by ABH. In conclusion, AGEs increase arginase activity, probably through the ERK1/2/p38 MAPK pathway, owing to increased arginase I expression. Furthermore, AGEs impair vascular function in a manner that can be reversed by arginase inhibition. Therefore, AGEs may be pivotal in the deleterious effects of arginase in diabetic VD, providing a novel therapeutic target. ## 1. Introduction Vascular dysfunction (VD) contributes to several diabetic complications, and its pathophysiology is intricately linked to oxidative stress and inflammation. Advanced glycation end products (AGEs) and the arginase enzyme have each been shown to play roles in VD; however, the relationship between these two factors in diabetic VD is not yet clear. 
Arginase is well established as an important enzyme in the urea cycle, detoxifying ammonia by hydrolyzing L-arginine to ornithine and urea. There are two identified isoforms encoded by different genes, arginase I and II; they share similar mechanisms and metabolites [1,2]. Both isoforms are constitutively expressed in human endothelial cells, with arginase I located in the cytosol and arginase II in the mitochondria [3,4]. In addition to its role in the urea cycle, arginase produces the ornithine required for the synthesis of polyamines and L-proline, which are involved in cell proliferation, differentiation, and repair [5]. There is a growing body of evidence indicating that constitutive levels of arginase activity in the endothelium limit NO synthesis and NO-dependent vasodilatory function [6,7,8]. Arginase has been shown to be induced by various stimuli such as oxidative stress, oxidized lipoproteins, tumor necrosis factor (TNFα), and hypoxia [9,10,11,12,13,14]. Upregulation of arginase has also been demonstrated in cells exposed to high glucose and in diabetic animal models. High glucose increased arginase activity and limited NO production in bovine coronary endothelial cells via a Rho-kinase-dependent pathway, in which siRNA knockdown of arginase I prevented the high-glucose-induced changes [15]. Arginase upregulation was shown to be mediated by reactive oxygen species (ROS) and the PKC/RhoA pathway [9]. Interestingly, both arginase and endothelial nitric oxide synthase (eNOS) contributed to high-glucose-induced superoxide production, owing to uncoupling of eNOS associated with diminished availability of L-arginine [9,16]. The functional impairment associated with increased arginase expression and activity in diabetes has been demonstrated in isolated vascular preparations and under in vivo conditions [17]. Both mRNA expression and activity of arginase were increased in the aorta and liver of a streptozotocin-induced diabetic rat model [15]. 
Impaired endothelium-dependent vasorelaxation of coronary arteries from rats with type 1 diabetes was normalized by arginase inhibition [15]. Moreover, aortic and retinal endothelial dysfunction in streptozotocin-induced type 1 diabetes was linked to increased arginase expression [18,19]. The role of arginase in vascular dysfunction in vivo was investigated in type 2 diabetic rats, in which arginase inhibition improved myocardial microvascular function by increasing NO availability [20]. Additionally, arginase has been identified as a key player in skeletal muscle arteriolar endothelial dysfunction in a diabetic rat model, where inhibition of arginase restored flow-induced vasodilation [21]. Arginase upregulation and vasodilation impairment have been reported in the cavernous tissue of diabetic rats, linked to extracellular signal-regulated kinase (ERK1/2) [22]. Clinical studies on diabetic patients support earlier findings in animal studies indicating a significant role for arginase in endothelial dysfunction. Plasma arginase activity was elevated in patients with type 2 diabetes mellitus in comparison with healthy subjects and correlated positively with fasting plasma glucose levels and glycosylated hemoglobin (HbA1c) levels [23]. Furthermore, arginase levels in plasma were associated with markers of oxidative stress and HbA1c [23]. Functionally, coronary arterioles obtained from patients with diabetes displayed reduced endothelium-dependent relaxation in vitro and increased expression of arginase I in endothelial cells [24]. The endothelium-dependent vasodilatation of coronary arterioles was enhanced by arginase inhibition [24]. In addition, an in vivo study demonstrated that arginase inhibition markedly improves endothelium-dependent vasodilatation in the forearm of patients with type 2 diabetes and coronary artery disease, while it does not affect endothelial function in healthy controls [25]. 
On the other hand, AGEs, the products of non-enzymatic glycation and oxidation of proteins and lipids that accumulate in diabetes, together with their signal transduction receptor (RAGE), are linked to both the etiology and the pathological consequences of type 1 and type 2 diabetes [26,27]. AGEs form at an accelerated rate in hyperglycemia and accumulate in the blood vessel wall, directly modifying proteins by the formation of cross-links, primarily in the basement membrane and the extracellular matrix [26,27]. Furthermore, circulating AGEs interact with endothelial RAGEs to transduce multiple signaling pathways, which lead to perturbation of cellular functions [27]. RAGE is a member of the immunoglobulin superfamily that binds multiple ligands such as AGEs, HMGB-1, S100 proteins, and amyloid beta peptide [28,29,30]. Engagement of RAGE by its agonists activates several pathways that result in the activation of NADPH oxidases, ROS production, ERK, p38 MAP kinase, the JAK/STAT pathway, phosphoinositide 3-kinases, and the NF-κB pathway, which culminate in the upregulation of RAGE and other profibrotic and proinflammatory target genes [27]. Clinically, the levels of serum AGEs in patients with type 2 diabetes are inversely related to the degree of endothelium-dependent and endothelium-independent vasodilation [31]. Several mechanisms by which AGEs affect NO bioavailability have been suggested in the literature, mostly relating to eNOS. AGEs may reduce the stability of eNOS or impair NO production via RAGE-induced deactivation of the eNOS enzyme [32,33]. To our knowledge, it is still not clear whether AGEs directly affect arginase activity, arginase expression, or NO bioavailability in endothelial cells. Given that AGEs, via RAGE, induce ROS formation and ERK1/2 activation, which are also signaling pathways implicated in arginase stimulation in the diabetic vasculature, as shown previously, we sought to investigate the effect of an AGE (MGA) on arginase activity and expression. 
We hypothesized that AGEs may upregulate arginase enzymes, leading to a reduction in the availability of arginine and NO, thus causing deleterious effects on vascular function. ## 2.1. Cell Culture and Treatments In all cell experiments, mouse aortic endothelial cells (MAECs) were utilized. Proliferating MAECs were purchased from Cell Applications, San Diego, CA, USA. Cells were cultured in Endothelial Growth Medium (Cell Applications, San Diego, CA, USA) and maintained in a humidified atmosphere at 37 °C and $5\%$ CO2. Cells were adapted to grow in M199 supplemented with 50 µM L-arginine (Invitrogen, Carlsbad, CA, USA) for 72 h before the experiment to match the normal plasma L-arginine concentration (40 to 100 µM). In addition, $10\%$ FBS (Catalog # SH30396, HyClone, GE Healthcare Life Sciences, South Logan, UT, USA), $1\%$ penicillin/streptomycin, and $1\%$ L-glutamine were added to the cell growth medium. Cells from passages 3 to 9 were used for experiments. When cells reached $80\%$ confluency, they were serum-starved overnight in M199 supplemented with 50 µM L-arginine, $1\%$ L-glutamine, $1\%$ penicillin/streptomycin, and $0.2\%$ FBS. Glycated albumin (MGA) was prepared as described and characterized previously [34,35]. Briefly, 500 μM methylglyoxal (Sigma, Catalog #M0252, St. Louis, MO, USA) was incubated with 100 μM BSA (Sigma) dissolved in phosphate-buffered saline (PBS) for 24 h, then washed on 10 kDa filters (Macrosep® Advance Device, Pall Life Sciences, MI, USA) to remove excess methylglyoxal, reconstituted with M199 serum-free media, and passed through a 0.2 μm filter [34,35]. 
In subsets of cells, the following inhibitors were added 2 h before the addition of MGA (100 µM) (Sigma-Aldrich, St. Louis, MO, USA) for 24 h: the arginase inhibitor, the boronic acid 2(S)-amino-6-boronohexanoic acid (ABH) (1 mM, ChemCruz, Catalog #221197, Dallas, TX, USA); the p38 MAPK inhibitor SB-202190 (10 µM) (EMD Biosciences, Catalog #S7076, San Diego, CA, USA); and the MEK/ERK1/2 inhibitor PD98059 (10 µM) (EMD Biosciences, Catalog #P215, San Diego, CA, USA). Inhibitor concentrations and durations were as previously described [36]. Three to five independent experiments were carried out using cells from different passages. ## 2.2. Arginase Activity Arginase activity was measured using a colorimetric determination of urea production from L-arginine, as described previously [37]. Cells were lysed in Tris buffer (50 mM Tris-HCl, 0.1 mM EDTA and EGTA, pH 7.5) containing protease inhibitors (Catalog # P8340, Sigma, St. Louis, MO, USA). The lysates were subjected to three freeze–thaw cycles and then centrifuged for 10 min at 20,000× g. The supernatants were used for the arginase activity assay. In brief, 25 µL of supernatant was heated with MnCl2 (10 mM) for 10 min at 56 °C to activate arginase. The mixture was then incubated with 50 µL L-arginine (0.5 M, pH 9.7) for one hour at 37 °C to hydrolyze the L-arginine. The hydrolysis reaction was stopped with acid, and the mixture was then heated at 100 °C with 25 µL of α-isonitrosopropiophenone ($9\%$ α-ISPF in EtOH) for 45 min. The samples were kept in the dark at room temperature for 10 min; then, absorbance was measured at 540 nm. ## 2.3. Immunodetection of Arginase Cells were lysed in RIPA buffer (#ab156034, Abcam, Boston, MA, USA) containing protease and phosphatase inhibitors (Catalog #P5726 and P0044, Sigma, St. Louis, MO, USA). Cell lysates were centrifuged for 10 min at 20,000× g, and supernatants were collected for Western blotting analysis. 
Protein estimation was conducted in supernatants using a protein assay kit (Bio-Rad, Hercules, CA, USA). Equal amounts of protein were loaded, separated by electrophoresis on $10\%$ SDS-PAGE gels, and transferred onto nitrocellulose membranes. The blots were blocked using $5\%$ bovine serum albumin (Sigma, St. Louis, MO, USA) and incubated with the primary antibodies anti-arginase 1 (Santa Cruz, Catalog #166920, 1:1000, Dallas, TX, USA), anti-arginase 2 (Santa Cruz, Catalog #393496, 1:1000, Dallas, TX, USA), and anti-GAPDH (Catalog #abx005569, 1:10,000, Abbexa, Cambridge, UK), followed by the respective secondary antibodies. Signals were detected using chemiluminescence (Pierce™ ECL Western, Thermo Fisher Scientific, IL, USA) and the ChemiDoc MP imaging system (Bio-Rad, Hercules, CA, USA). To quantify the resultant blots, individual band intensities were measured (arbitrary units), and the ratio of protein to GAPDH was calculated per sample using NIH ImageJ software version 1.53. ## 2.4. Histochemical Detection of Intracellular NO For the detection of intracellular NO, endothelial cells ($1.2 \times 10^{5}$ cells) were plated on a non-coated cover slide (18 × 18 mm) and starved for 24 h prior to treatment; cells were treated with either bovine serum albumin (100 μM) or MGA (100 μM) for 24 h. For inhibition conditions, the inhibitors L-NAME (Abcam, Catalog #120136, 1 mM, UK) or ABH (1 mM) were added 30 min before the addition of the incubation media (DAF-2DA, Catalog #ab145283, 5 μM, for 40 min, Abcam, in serum-free media) according to the manufacturer’s instructions and as previously described [38]. To promote NO generation by NOS, subsets of cells were treated with acetylcholine (1 μM, Sigma) and L-arginine (1 mM, Sigma) to intensify the signal during the 40 min incubation. Then, cells were washed with PBS twice, fixed in $2\%$ paraformaldehyde for 3 min at 0 °C, and mounted on a slide with mounting media as reported previously [39]. 
Cells were directly observed under an inverted fluorescence microscope (AxioObserver.Z1; Zeiss, Jena, Germany). The fluorescence intensity of representative images from 3 independent experiments was quantified using NIH ImageJ software version 1.53. ## 2.5. Animals Vascular function experiments were performed on aortas obtained from C57BL/6J wild-type mice aged 10 months. Protocols were approved by the Institutional Animal Care and Use Committee of the Medical College of Georgia (Animal Welfare Assurance no. D16-00197). ## 2.6. Vascular Function Vascular function was assessed as described previously [40]. Following deep anesthesia, tissues were harvested, and mouse aortas were rapidly excised and placed immediately in ice-cold Krebs–Henseleit buffer (NaCl, 118 mM; NaHCO3, 25 mM; glucose, 5.6 mM; KCl, 4.7 mM; KH2PO4, 1.2 mM; MgSO4·7H2O, 1.17 mM; and CaCl2·2H2O, 2.5 mM), cleaned, and cut into 2–3 mm segments. Thereafter, aortic rings were placed in M199 serum-free media supplemented with 50 μM L-arginine, with or without the addition of MGA and the arginase inhibitor (ABH, 1 mM), for 24 h at 37 °C in culture chambers. Aortic rings (3–4 for each condition) were mounted in an oxygenated wire myograph chamber (Danish Myo Technology, Ann Arbor, MI, USA). Tissues were allowed to equilibrate at a resting tension of 5 mN for 1 h with buffer changes. Following phenylephrine (1 μM) precontraction, relaxation curves were generated using progressive doses of acetylcholine (ACh, an endothelium-dependent vasodilator) or sodium nitroprusside (SNP, an endothelium-independent vasodilator). Changes in tension were measured by a force transducer. A 1 h equilibration was performed between subsequent relaxation curves. Vasorelaxation responses were calculated as the percentage of the phenylephrine-induced contraction. ## 2.7. Statistical Analysis Data are given as mean ± SEM. 
For multiple comparisons, statistical analysis was performed by one-way analysis of variance (ANOVA) with the Tukey post hoc test. For single comparisons, statistical differences were determined by Student's t-test. Differences in concentration–response curves were determined using two-way repeated-measures ANOVA. Independent experiments were performed 3–6 times. All statistical analyses were performed with GraphPad Prism version 8.01 (San Diego, CA, USA). Results were considered significant when $p \leq 0.05$. ## 3.1. Arginase Activity Treatment of endothelial cells (MAEC) with MGA (100 μM, 24 h) increased arginase activity by $64\%$ compared to the control BSA-treated cells ($p \leq 0.001$), as shown in Figure 1. This increase was abrogated when cells were pretreated with the p38 MAPK inhibitor SB-202190 (10 µM), the MEK/ERK1/2 inhibitor PD98059 (10 µM), or the arginase inhibitor ABH (1 mM); $n = 5$ independent experiments. ## 3.2. Arginase Expression MGA treatment (100 μM, 24 h) increased arginase I immunodetected protein expression by $41.6\%$ ($p \leq 0.05$, $n = 5$) compared to control BSA conditions, as shown in Figure 2A; however, arginase II expression was not altered, as demonstrated in Figure 2B. These findings indicate that arginase I is the isoform that mainly contributed to the increased arginase activity observed in this study. ## 3.3. Histochemical Detection of Intracellular NO Intracellular NO generation was assessed in MAECs utilizing the DAF-2DA marker. Subsets of cells were treated with BSA as a control (100 μM, 24 h) (Figure 3A); the addition of ACh (1 μM) to BSA-treated cells induced an increase in DAF-2DA fluorescence, reflecting NO generation (Figure 3B), compared with no ACh in Figure 3A. Pretreatment with L-NAME (1 mM) reduced ACh-induced NO production (Figure 3C), while ACh-induced NO production increased with pretreatment with the arginase inhibitor ABH (1 mM) (Figure 3D). 
Another subset of cells was pretreated with MGA (100 μM, 24 h), which resulted in nearly undetectable fluorescence without ACh stimulation (Figure 3E); NO production increased slightly after the addition of ACh in MGA-treated cells (Figure 3F), whereas L-NAME blunted NO production in ACh-stimulated, MGA-treated cells (Figure 3G). Interestingly, pretreatment with ABH rescued NO production to a level close to that of the control ACh-stimulated cells (Figure 3H). A quantification of DAF fluorescence intensity under the different treatment conditions is depicted in Figure 3I. Notably, the ABH-induced restoration of ACh-stimulated NO production in BSA-treated cells, indicated by increased fluorescence intensity, was reversed by L-NAME to a level below that observed without ABH; since eNOS was inhibited by L-NAME, this confirms that the effect of ABH is due to inhibition of the arginase enzyme rather than stimulation of eNOS (Figure 3I). ## 3.4. Vascular Function To determine the effect of MGA on endothelial function ex vivo, we performed vascular studies using aortas isolated from healthy C57BL/6J mice. We examined vasorelaxation responses to the endothelium-dependent vasodilator ACh and the endothelium-independent vasodilator SNP (Figure 4). Pretreatment of isolated aortas with MGA (100 μM, 24 h) induced an impairment of the vasorelaxation response to ACh (maximum relaxation of 39.7 ± $5.7\%$ vs. 90.7 ± $1.7\%$ in the control condition, $p \leq 0.05$, $n = 3$–5 independent experiments), as shown in Figure 4A. ABH largely prevented the MGA-induced impairment, with a maximum relaxation of 80.4 ± $5.3\%$ ($p \leq 0.05$, $n = 3$–5 independent experiments). Thus, blocking arginase activity reversed the MGA-induced impairment. Aortic relaxation responses to SNP were not different between control, MGA-treated, and ABH- and MGA-treated rings, as demonstrated in Figure 4B. 
ABH pretreatment of control rings (BSA) did not affect vasorelaxation responses to either ACh or SNP (data not shown). ## 4. Discussion This study demonstrates for the first time that advanced glycation end products, represented by methylglyoxal-modified albumin, stimulate arginase enzyme activity through a MEK/ERK1/2- and p38 MAPK-dependent pathway, as summarized in Figure 5. The increased activity is mainly due to increased arginase I expression, as shown in our study. Our findings support previous reports showing that constitutive levels of arginase activity in endothelial cells limit NO synthesis and NO-dependent vasodilatory function [6,7,8]. In hyperglycemic conditions, both AGEs and arginase have been individually linked to various diabetic complications, including vascular dysfunction; however, the literature lacks studies investigating whether AGEs directly influence arginase regulation. Previously, AGE-modified albumin was shown to have suppressive effects on NOS-3 activity and expression in HUVECs, an effect that, if combined with upregulation of arginase, would aggravate the limited NO bioavailability and VD [41]. Intracellular detection of NO in cultured endothelial cells in our study showed that the MGA-induced increase in arginase activity and expression was accompanied by a reduction in NO bioavailability. Furthermore, we show that MGA treatment of aortic rings impaired endothelium-dependent vasodilation in response to ACh, which was reversed by arginase inhibition (ABH) without affecting SNP-induced (endothelium-independent) vasorelaxation, suggesting a role for the endothelial arginase enzyme in MGA-induced vascular impairment. In accordance with these findings, aortic rings treated with AGE demonstrated blunted endothelium-dependent vasorelaxation. 
These findings are consistent with a previous report by Watson's group, in which AGE treatment of rat aortic rings impaired endothelium-dependent vasodilation that was blocked by inhibition of arginase, NADPH oxidase, and superoxide [42]. We found no alteration of endothelium-independent relaxation; however, they reported increased endothelium-independent vasodilation with AGE [42]. Furthermore, they reported increased arginase and NADPH oxidase mRNA expression with MGA treatment, which is not necessarily predictive of protein expression. In contrast, our study showed an increase in both the activity and the protein expression of the arginase enzyme upon MGA treatment. Similar to our findings, coronary arteries obtained from diabetic patients had increased protein levels of arginase I and showed a better vasodilation response to ACh in the presence of an arginase inhibitor [24]. Moreover, we provide evidence of reduced NO production using the intracellular marker DAF-2DA, whereas arginase inhibition with ABH restored ACh-induced NO production in cultured endothelial cells treated with MGA, which explains our vascular function findings. In concordance with our finding that arginase I expression was preferentially increased by AGEs, arginase knockout mouse models suggest that arginase I is crucial in diabetes-induced vascular dysfunction. One study showed that streptozotocin-induced diabetic knockout mice lacking arginase II with partial deletion of arginase I exhibited better endothelium-dependent vasodilation and lower arginase activity compared with diabetic wild-type mice and knockout mice lacking the arginase II isoform alone [18]. A growing body of evidence indicates that AGE receptor (RAGE) engagement by its ligands, including AGEs, stimulates NADPH oxidase, reactive oxygen species (ROS) production, ERK1/2, p38 MAP kinase, NF-κB activation, and gene transcription, culminating in the microvasculature alterations manifested in diabetes [43,44,45]. 
Arginase expression/activity has been extensively shown to be stimulated in cultured endothelial cells by a wide range of stimuli involving oxidative stress, including high glucose [15], oxidized low-density lipoprotein (LDL) [12], H2O2 [5,46,47], peroxynitrite [9], and endotoxins [10]. Additionally, in vivo studies have revealed elevated endothelial arginase expression in conditions well known to be associated with elevated oxidative stress, such as ischemia–reperfusion [48] and ageing [49]. Moreover, AGEs acting via RAGE receptors, as well as arginase-induced eNOS uncoupling, may lead to ROS formation, including the superoxide anion (O2−), which further combines with NO to form the potent oxidant peroxynitrite, limiting NO bioavailability and aggravating the oxidative injury to endothelial cells [50]. Taken together, AGE-induced arginase upregulation might result from AGE-stimulated ROS formation and, at the same time, might contribute to the AGE-induced ROS loop. Arginase activation has been linked to protein kinase C (PKC), Rho-associated protein kinase (ROCK), and the mitogen-activated protein kinase (MAPK) pathways [9,51,52]. Post-translational modifications such as S-nitrosylation of arginase I via inducible NOS2 have been identified in age-related endothelial dysfunction [53]. In addition, the physiologic modulation of the glutathione/glutathione disulfide ratio has been suggested to play a role in the control of arginase I activity in pathological conditions of increased oxidative stress [13]. Although we show no change in the protein expression of arginase II, it may contribute to increased arginase activity through other activating mechanisms. Pandey et al. demonstrated a mechanism for a rapid increase in arginase II activity via translocation from the mitochondria to the cytoplasm in response to the interaction of oxidized LDL with the LOX-1 receptor, causing NO dysregulation and vascular dysfunction [54]. 
AGEs have been reported to bind the LOX-1 receptor, presenting a compelling mechanism for an arginase II contribution to increased arginase activity that requires further investigation [55,56]. In concordance with previous evidence that hyperglycemia-induced NO dysregulation, increased ROS generation, and endothelial dysfunction are maintained even after the restoration of normoglycemia (the hyperglycemic memory phenomenon), earlier studies have shown that the degree of endothelial function improvement achieved by arginase inhibition is independent of glucose control, which can be partly explained by the involvement of the AGE/RAGE axis in this phenomenon [57,58,59]. These intriguing observations highlight the role of AGEs in the arginase-mediated regulation of NO and oxidative stress, which may present a putative therapeutic target to maintain cardiovascular integrity and function in diabetes. ## 5. Conclusions Based on our findings, we conclude that AGEs contribute to VD by upregulating arginase activity and expression, thus limiting NO bioavailability in endothelial cells. This study emphasizes the importance of further investigating the interaction between AGEs and arginase enzymes, particularly in diabetes.
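The vasorelaxation values reported in Section 3.4 follow the normalization stated in the Methods: relaxation is expressed as a percentage of the phenylephrine-induced precontraction. A minimal sketch of this arithmetic, using illustrative tension values that are not from the study:

```python
def percent_relaxation(baseline_mN, precontracted_mN, tension_mN):
    """Relaxation as a percentage of the phenylephrine-induced contraction.

    100% means the ring returned fully to baseline tension;
    0% means no relaxation from the precontracted plateau.
    """
    contraction = precontracted_mN - baseline_mN
    return 100.0 * (precontracted_mN - tension_mN) / contraction

# Illustrative numbers only: resting tension 5 mN (as in the Methods),
# hypothetical phenylephrine plateau 15 mN, tension 6 mN after a high ACh dose
relax = percent_relaxation(baseline_mN=5.0, precontracted_mN=15.0, tension_mN=6.0)
# relax = 90.0, i.e. 90% relaxation of the precontraction
```

The maximum of such values across the ACh dose series gives the "maximum relaxation" figures quoted above (e.g., 90.7% in controls vs. 39.7% with MGA).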
# Fast Eating Speed Could Be Associated with HbA1c and Salt Intake Even after Adjusting for Oral Health Status: A Cross-Sectional Study ## Abstract This study aimed to examine the relationship between eating speed and hemoglobin A1c (HbA1c), considering the number of teeth, using cross-sectional health examination data from community-dwelling older individuals in Japan. We used data from the Center for Community-Based Healthcare Research and Education Study in 2019. We collected data on gender, age, body mass index, blood test results, salt intake, bone mineral density, body fat percentage, muscle mass, basal metabolic rate, number of teeth, and lifestyle information. Eating speed was evaluated subjectively as fast, normal, or slow. Overall, 702 participants were enrolled in the study, and 481 participants were analyzed. Multivariate logistic regression analysis revealed a significant association between fast eating speed and being male (odds ratio [$95\%$ confidence interval]: 2.15 [1.02–4.53]), HbA1c (1.60 [1.17–2.19]), salt intake (1.11 [1.01–1.22]), muscle mass (1.05 [1.00–1.09]), and sufficient sleep (1.60 [1.03–2.50]). Fast eating may be associated with overall health and lifestyle. The characteristics of fast eaters, after taking oral health information into consideration, tended to increase the risk of type 2 diabetes, renal dysfunction, and hypertension. Dental professionals should provide dietary and lifestyle guidance to fast eaters. ## 1. Introduction In a large cohort study in Japan, unhealthy dietary habits were reported to affect obesity and other health conditions [1]. Examples of unhealthy eating habits are snack food consumption, late-night meal consumption, and fast eating [2,3]. Of these, fast eating has been noted to have the potential for widespread health effects [4]. In general, fast eating has been reported to be associated with systemic diseases such as obesity, type 2 diabetes, non-alcoholic fatty liver disease, and renal dysfunction [5,6,7]. 
Although there is no fixed definition of eating speed, in many studies eating speed is assessed by the participant's subjective self-report rather than by timed measurement [8]. The assessment of self-reported eating speed is also included in the self-administered diet history questionnaire (DHQ), which was developed to evaluate the dietary habits of the Japanese population and has been suggested to be related to eating habits and obesity levels [9]. A meta-analysis has reported a higher body mass index (BMI) among faster eaters [8]. Eating speed has also been reported to be significantly associated with a higher risk of metabolic syndrome, elevated blood pressure, and obesity [10]. In addition, eating speed may be associated with proinflammatory cytokines (IL-1β) in Japanese men without metabolic diseases [11]. Conversely, slower eating speeds have been reported to reduce excess food and energy intake [12]. Therefore, eating speed, which is a lifestyle habit, is highly likely to have an impact on health. Focusing on type 2 diabetes, which is strongly associated with metabolic syndrome, it has been reported that fast eating speed is associated with a rapid increase in blood glucose levels [13,14]. Eating speed has also been implicated as an intermediate factor in obesity and may be associated with diabetes [15]. In addition, some reports suggest that fast eating doubles the risk of type 2 diabetes [16]. However, while gender, age, and BMI have been adjusted for as confounding factors in many studies, few studies have considered oral and dental health status. Whether oral and dental health status influences eating speed is controversial. Evidence on the association between masticatory function and eating speed is mixed: some reports indicate that people with fewer dental prostheses eat faster, while a study of 30,938 Japanese adults reported that masticatory difficulty was associated with higher hemoglobin A1c (HbA1c) [17]. 
Some studies have reported that increasing the number of chewing cycles decreased eating speed, whereas others reported no relationship [18,19]. Regarding the relationship between the number of teeth and eating speed, one report found that the probability of having metabolic syndrome was 2.5 times higher in those with few remaining teeth and fast eating than in those with many remaining teeth and slow eating [20]. Additionally, a higher number of remaining teeth and a slower eating speed have been reported to reduce the likelihood of metabolic syndrome in the older population [20]. Overall, previous studies suggest that eating speed and oral and dental health status may be related, but few studies have considered such oral and dental health-related factors in studies of eating speed [20,21]. Therefore, two hypotheses were formulated: first, eating speed is associated with oral and dental health-related status; and second, even after adjusting for oral and dental health-related status, systemic health conditions such as type 2 diabetes, together with lifestyle conditions, are associated with eating speed [20]. This study aimed to examine the relationship between eating speed and systemic health conditions, considering oral and dental health status, using cross-sectional health examination data from community-dwelling older individuals in Japan. ## 2.1. Data Collection This study used the same dataset as other reports because it is based on datasets obtained from health examinations [22,23]. However, the variables of interest for the analysis and the analysis methods were different. This study was approved by the Medical Research Ethics Committee of Shimane University Faculty of Medicine (number: 20220622-1). Written informed consent was obtained from all participants, and data were collected. ## 2.2. 
Center for Community-Based Healthcare Research and Education (CoHRE) Study The CoHRE Study is a cohort study conducted by the Shimane University Center for Community-based Healthcare Research and Education to predict and prevent lifestyle-related diseases in Ohnan-cho, Shimane Prefecture [22,24]. Surveys on health and medical information, various clinical examination information, lifestyle information, human relationship information, social resource information, and medical care cost information are ongoing. ## 2.3. Study Design This study used cross-sectional data from the 2019 Shimane CoHRE Study; the 2019 data are the most recent version of the dataset because surveys have not been conducted after 2019 due to the COVID-19 pandemic [22]. ## 2.4. Inclusion Criteria The inclusion criteria were as follows: residents covered by Japan National Health Insurance; residents of Ohnan-cho, a mid-mountainous area in Shimane Prefecture, Japan; and residents who participated in the 2019 survey [24]. ## 2.5. Exclusion Criteria Data from residents with missing values were excluded, and complete data were analyzed [22,23]. ## 2.6.1. Background Data In the CoHRE Study, data were collected annually through standardized questionnaires, physical measurements, blood tests, and urine analysis [22]. We collected data on the following variables: gender (male/female), age, body mass index, high-density lipoprotein (HDL) cholesterol, low-density lipoprotein (LDL) cholesterol, triglyceride, γ-glutamyl transpeptidase, glycemic index, HbA1c, estimated glomerular filtration rate, creatinine, sodium, potassium, salt intake, bone mineral density, body fat percentage, muscle mass, basal metabolic rate, number of teeth, smoking status, physical activity, walking speed (yes/no), sleeping status, and alcohol consumption (every day, sometimes, none). Salt intake was estimated from spot urine specimens using the Tanaka method [25]. 
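The Tanaka method referenced above estimates 24-h urinary sodium excretion from a spot urine specimen and converts it to salt intake. A sketch using the commonly cited Tanaka coefficients, which should be treated as assumptions and verified against reference [25] before any reuse:

```python
def tanaka_salt_intake(spot_na_meq_l, spot_cr_mg_dl, age_y, weight_kg, height_cm):
    """Estimated salt intake (g/day) from a spot urine specimen.

    Coefficients are the commonly cited Tanaka et al. values; treat them
    as assumptions and check the original reference before reuse.
    """
    # Predicted 24-h urinary creatinine excretion (mg/day)
    pr_cr = -2.04 * age_y + 14.89 * weight_kg + 16.14 * height_cm - 2244.45
    # Estimated 24-h sodium excretion (mEq/day); spot Cr is converted
    # from mg/dL to mg/L so the Na/Cr ratio is in mEq/mg
    na_24h = 21.98 * ((spot_na_meq_l / (spot_cr_mg_dl * 10.0)) * pr_cr) ** 0.392
    # Convert sodium to salt: 1 mEq Na corresponds to about 58.5 mg NaCl
    return na_24h * 58.5 / 1000.0

# Illustrative inputs only (not study data)
salt_g = tanaka_salt_intake(spot_na_meq_l=100.0, spot_cr_mg_dl=100.0,
                            age_y=67, weight_kg=60, height_cm=160)
```

For these illustrative inputs the estimate lands near 8 g/day, on the same order as the mean salt intake of 9.5 g/day reported in the Results.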
Walking speed and sleep status were each binarized as “yes/no”: for walking speed, “yes” meant walking faster than an average individual of the same gender and about the same age, and for sleep status, “yes” meant that respondents felt well rested [26]. ## 2.6.2. Eating Speed As in previous studies, a self-reported subjective assessment of eating speed was applied [10,27]. Eating speed was evaluated subjectively on a three-point scale: fast, normal, and slow. For the analysis, eating speed was dichotomized into two groups: fast and normal/slow. ## 2.7. Statistical Analysis After confirming the normality of participant data using the Shapiro–Wilk test, continuous data were expressed as means and standard deviations, while categorical data were expressed as numbers (%). Logistic regression analysis (backward stepwise) was used to control for possible confounding variables related to eating speed. Partial regression coefficients for eating speed were estimated after adjusting for all other variables included in the model. Adjustment items included gender, age, body mass index, HDL cholesterol, LDL cholesterol, triglyceride, γ-glutamyl transpeptidase, HbA1c, estimated glomerular filtration rate, smoking, physical activity, walking speed, sleeping, alcohol consumption, creatinine, sodium, potassium, salt intake, bone mineral density, body fat percentage, muscle mass, basal metabolic rate, and the number of teeth. All statistical analyses were performed using SPSS version 26 (IBM, Armonk, NY, USA). Two-tailed p-values were calculated for all analyses. ## 3.1. Participant Characteristics The participants’ characteristics are summarized in Table 1. Overall, 702 participants were enrolled in the study, and 220 were excluded due to missing data. Ultimately, 481 participants were included in the analysis. Of the participants, 223 ($46.4\%$) were male, and the mean age was 66.7 (SD: 7.4) years. The mean body mass index was 66.7 (SD: 7.4) kg/m2.
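The backward stepwise selection described in the Statistical Analysis section (Section 2.7) can be illustrated with a small, self-contained sketch. This is not the authors' SPSS procedure: it fits a logistic model by Newton–Raphson on synthetic data (the predictor names, such as `hba1c`, are hypothetical) and repeatedly drops the predictor with the largest Wald p-value until every remaining predictor is significant:

```python
import numpy as np
from math import erf

def fit_logit(X, y, iters=50):
    """Newton-Raphson fit of a logistic regression; returns (beta, se)."""
    Xd = np.column_stack([np.ones(len(X)), X])     # prepend intercept column
    beta = np.zeros(Xd.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Xd @ beta))           # predicted probabilities
        H = Xd.T @ (Xd * (p * (1 - p))[:, None])   # observed information matrix
        beta = beta + np.linalg.solve(H, Xd.T @ (y - p))
    se = np.sqrt(np.diag(np.linalg.inv(H)))        # Wald standard errors
    return beta, se

def wald_p(b, s):
    """Two-tailed p-value for the Wald z-statistic b/s."""
    z = abs(b / s)
    return 1 - erf(z / 2 ** 0.5)

def backward_stepwise(X, y, names, alpha=0.05):
    """Drop the least significant predictor until all are below alpha."""
    keep = list(range(X.shape[1]))
    while True:
        beta, se = fit_logit(X[:, keep], y)
        pvals = [wald_p(b, s) for b, s in zip(beta[1:], se[1:])]  # skip intercept
        worst = int(np.argmax(pvals))
        if pvals[worst] <= alpha or len(keep) == 1:
            return [names[i] for i in keep]
        del keep[worst]

# Synthetic data: only the first predictor truly drives the outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
p_true = 1 / (1 + np.exp(-(1.5 * X[:, 0] - 0.2)))
y = (rng.random(500) < p_true).astype(float)

selected = backward_stepwise(X, y, ["hba1c", "noise_a", "noise_b"])
print(selected)
```

With a strong simulated effect, the informative predictor survives elimination; a noise variable survives only by chance at the chosen alpha.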
The mean HDL cholesterol level was 61.7 (SD: 15.1) mg/dL. The mean LDL cholesterol level was 121.8 (SD: 27.4) mg/dL. The mean triglyceride level was 101.9 (SD: 65.3) mg/dL. The mean gamma-glutamyl transpeptidase level was 37.7 (SD: 54.1) IU/L. The mean HbA1c was 6.0 (SD: 7.0). The mean estimated glomerular filtration rate was 69.4 (SD: 13.1) mL/min/1.73 m2. The mean creatinine was 85.9 (SD: 55.9) mL/min. The mean sodium was 119.7 (SD: 56.3) mEq/day. The mean potassium was 54.5 (SD: 30.7) mEq/day. The mean salt intake was 9.5 (SD: 2.1) g/day. The mean bone mineral density was 88.3 (SD: 12.2). The mean body fat percentage was $24.2\%$ (SD: $8.9\%$). The mean muscle mass was $41.2\%$ (SD: $8.5\%$). The mean basal metabolic rate was 1208.3 (SD: 230.8) kcal/day. The mean number of teeth was 23.5 (SD: 7.8). There were 41 ($8.5\%$) smokers. From the questionnaire, 261 ($54.3\%$) participants answered that they do physical exercises on a daily basis, 209 ($43.5\%$) answered that their walking speed was fast, and 350 ($72.8\%$) answered that they slept well. Eating speed was fast in 136 ($28.3\%$) participants, normal in 301 ($62.6\%$), and slow in 44 ($9.1\%$). There were 133 ($27.7\%$) participants who drank alcohol every day, 105 ($21.8\%$) who did sometimes, and 243 ($50.5\%$) who never did. ## 3.2. Univariate and Multivariate Logistic Regression Analysis The results of the univariate and multivariate logistic regression analyses are shown in Table 2.
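The odds ratios and confidence intervals reported in these analyses can be reproduced from a fitted logistic coefficient and its standard error via OR = exp(β) and 95% CI = exp(β ± 1.96·SE). A minimal sketch; the β and SE values below are back-calculated from the published HbA1c oddss ratio of 1.60 [1.17–2.18] for illustration, not taken from the paper:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic coefficient and its SE into an OR with a Wald 95% CI."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Back-calculated illustration (beta ~ ln(1.60), se chosen to match the CI).
or_, lo, hi = odds_ratio_ci(0.470, 0.158)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 1.6 1.17 2.18
```

The Wald interval assumes the coefficient is approximately normal on the log-odds scale, which is the standard output convention of packages such as SPSS.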
There were no significant associations in univariate analysis between eating speed and gender (odds ratio [$95\%$ confidence interval]: 1.00 [0.67–1.49]), age (0.99 [0.97–1.02]), HDL cholesterol (0.99 [0.97–1.00]), LDL cholesterol (1.00 [0.99–1.01]), triglyceride (1.00 [1.00–1.00]), gamma-glutamyl transpeptidase (1.00 [1.00–1.01]), estimated glomerular filtration rate (1.00 [0.98–1.01]), creatinine (1.00 [1.00–1.00]), sodium (1.00 [1.00–1.01]), potassium (1.00 [0.99–1.00]), bone mineral density (1.00 [0.99–1.02]), body fat percentage (1.01 [0.99–1.04]), muscle mass (1.02 [1.00–1.04]), basal metabolic rate (1.00 [1.00–1.00]), number of teeth (1.01 [0.98–1.04]), smoking (1.08 [0.53–2.23]), physical activity (0.95 [0.64–1.42]), walking speed (0.89 [0.59–1.32]), or drinking alcohol (1.15 [0.91–1.46]). However, univariate analysis revealed significant associations between eating speed and body mass index (odds ratio [$95\%$ confidence interval]: 1.07 [1.02–1.13]), HbA1c (1.60 [1.18–2.18]), and salt intake (1.14 [1.04–1.26]). Because this study used backward variable elimination in the logistic regression analysis, each variable entered the multivariable model once as an explanatory variable, and the following variables were ultimately retained as the most strongly related items. Multivariate logistic regression analysis revealed significant associations between eating speed and gender (2.15 [1.02–4.53]), HbA1c (1.60 [1.17–2.18]), salt intake (1.11 [1.01–1.22]), muscle mass (1.05 [1.00–1.09]), and sleeping (1.60 [1.03–2.50]). ## 4. Discussion Our major finding in this study is that fast eating might be linked to systemic conditions such as type 2 diabetes, renal dysfunction, and hypertension, even after adjusting for the number of teeth. A 5-year cohort study analyzing diabetes incidence in 4853 Japanese participants reported a hazard ratio of 2.08 for diabetes incidence at fast eating speeds compared with slow eating speeds [7].
In studies of Japanese adults investigating the relationship between eating speed and poor glycemic control, several reports indicated that fast eating was associated with poor glycemic control, a measure of postprandial blood glucose [28,29]. In addition, fast eating speed may be independently associated with insulin resistance in the Japanese population [30]. Since the results of this study are consistent with previous reports, we believe that eating speed is also associated with glycemic control in healthy older people living in the area we studied. However, because many of the reported studies were based on data from Japan, the results may differ in other racial or ethnic groups. In fact, as a Japan-specific finding, impaired glucose tolerance, which results in postprandial hyperglycemia, is common among young, thin Japanese women with a BMI < 18.5 kg/m2 and is associated with insulin resistance and adipose tissue abnormalities as its causes [31]. Therefore, whether the results of this study can be applied to populations other than the Japanese population should be determined with caution. In basic research using rats, histamine neurons have been reported to be involved in regulating masticatory function, particularly eating speed. Although histamine neurons in the brain are often described as being activated by slower eating to facilitate visceral fat burning via the sympathetic nervous system, the detailed mechanism is still unclear [32]. A strength of this study is that we reported results adjusted for masticatory function as a confounding factor. Although a relationship between masticatory function and eating speed has been reported, masticatory function has not been considered a factor related to eating speed in many studies [19]. Therefore, it is important to note that the results of this study are based on an analysis that considers oral and dental health status.
However, whether masticatory function correlates positively or negatively with eating speed remains controversial and must be considered. A review of the literature on the relationship between eating speed and salt intake yielded no data showing a relationship between the two. However, it has been reported that sensitivity to salt increases as the number of chews increases with the mastication of hard foods [33]. Since eating is generally considered to slow down as the number of chews increases, someone with a slower eating speed may have increased sensitivity to salt and may therefore suppress salt intake more than necessary [19]. A review has reported that lower salt intake is associated with a lower risk of cardiovascular disease, all-cause mortality, kidney disease, stomach cancer, and osteoporosis [34]. Therefore, while health guidance generally teaches limiting salt intake, correcting fast eating may be more effective as early preventive health guidance. Multiple reports on gender differences in eating speed indicate that males generally eat faster than females [35]. The presence of gender-related differences in eating speed is consistent with our findings, as eating speed is thought to depend on body size and bite size. Regarding muscle mass, slow eating speed has been reported to decrease muscle mass in people with type 2 diabetes [36]. In addition, slow eating has been reported to increase the likelihood of sarcopenia and undernutrition as well as loss of muscle mass, and the results of our study were similar [27,37]. However, more detailed cohort studies are needed to determine the causal relationship between muscle mass and eating speed: whether muscle mass is reduced because of slower eating speed, or whether eating speed is reduced due to decreased function of the masticatory and swallowing-related muscle groups caused by reduced muscle mass.
The relationship between sleep and eating speed was difficult to logically examine because no previous studies have pointed to a link between the two. One of the concerns in this study was whether oral status was related to the speed of food intake. Whether oral status would increase or decrease the speed of food intake was also difficult to predict. This is because if the number of teeth is high, the person can bite well and thus may eat more slowly; on the other hand, the person may bite more efficiently and thus may eat more quickly. At least for these two conflicting hypotheses, our results suggest that the number of teeth is not related to the food intake speed and that other factors (e.g., swallowing function, cognitive function, etc.) may be involved as determining factors. The central pattern generator (CPG) in the medulla oblongata of the brainstem is believed to contribute to the formation of the motor patterns of masticatory swallowing movements [38]. Because the cerebral cortex is believed to trigger swallowing and regulate the sequential activity of the brainstem, aging may alter the speed of food intake by disrupting this regulation [39]. In fact, it has been reported that patients with amyotrophic lateral sclerosis, a neurological disease, have problems with food intake due to dysfunction of the CPG in dysphagia, which prevents smooth swallowing [40]. Therefore, based on the results of this study, we hypothesized that the factors that influence the speed of food intake can be divided into three categories: background factors (gender, age, and body size), oral status (masticatory function, swallowing function, tongue pressure, oral dryness, oral hygiene, and tongue and lip motor function), and neural functions connecting the brain and the oral cavity, which were not investigated in this study. Mouthful volume may also be a factor that influences the food intake speed as a separate factor from oral function. 
Past reports indicate that there is a negative correlation between mouthful volume and the number of bites and that the number of bites per mouthful volume decreased as the mouthful volume increased [41]. New studies are needed to investigate the determinants of food intake speed in more detail. Several intervention studies have been conducted to control fast eating speed. The first was a randomized crossover design study in which participants were instructed to consume food under two dietary conditions, with slow eating resulting in significantly lower dietary energy intake [42]. The two conditions were a fast eating group (eat as quickly as possible: take large bites, chew quickly, and refrain from pausing and putting the spoon down between bites) and a slow eating group (eat as slowly as possible: take small bites, chew each bite thoroughly, and pause and put the spoon down between bites) [42]. The second study, which showed an association between fast eating speed and elevated blood glucose levels in a randomized crossover controlled trial on a Japanese female population, used an intervention method that set a time frame (fast group eating the test food in 10 min and slow group in 20 min) [13]. Another randomized controlled trial in a Japanese population also used a time frame intervention method (fast group in 5 min and slow group in 15 min) [43]. Considering these past studies, dental professionals should contribute to the prevention of systemic diseases by providing the following dietary guidance after restoring normal eating function by improving oral and swallowing conditions. The instructional approach to food intake should include chewing with small mouth openings, chewing thoroughly when chewing, and placing the spoon down on the table as often as possible to increase the spacing between food transports. It would be better to instruct them that each meal should take at least 20 min [13]. 
The key point here is the determination of oral and swallowing status. All previous studies are based on healthy individuals, albeit of different genders. In other words, the benefit of slow eating is based on the assumption that there are no oral or systemic problems. It is important for dentists and dental hygienists to demonstrate their expertise in making this determination. In the Japanese healthcare system, dental hygienists can provide practical guidance on oral and dental hygiene under the direction of dentists. In general, the main purpose of oral and dental hygiene instruction is to prevent dental diseases, and nutritional and dietary guidance have rarely been provided so far [44]. Moreover, dental hygienists have rarely provided instruction or guidance that focuses on fast eating. However, considering the potential for widespread systemic effects of fast eating suggested in the present study, it is necessary to provide guidance on eating too fast, as well as on dietary balance. Eating slowly, or chewing well, not only increases saliva production and improves digestive efficiency, but also has social significance, such as the enjoyment of taste [45]. Therefore, such dietary and lifestyle guidance carried out by dental hygienists, along with the restoration of oral function by dentists, could be considered important in contributing to overall systemic healthcare and preventing systemic diseases such as type 2 diabetes, renal dysfunction, and hypertension. In Japan, collaboration between medicine and dentistry has been strengthened since 2012, with dentists and dental hygienists playing an increasingly active role in both clinical practice and research [46]. It is hoped that studies with even larger sample sizes will become possible in the future and that the factors that make up fast eating speed and its effects on the whole body will be clarified. This study has four main limitations.
First, the lack of detailed data on medications and pre-existing treatment status limits adjustment in the multivariate analysis. Second, the causal relationship between the relevant items is unknown because this was a cross-sectional study. Third, eating speed was self-reported, which might lower objectivity and reliability. Finally, methodological biases, such as examinee and reporting bias, may be present. Future studies are needed to test causal relationships through a prospective cohort study that incorporates data on oral function. ## 5. Conclusions Fast eating may be associated with overall health and lifestyle. After taking oral and dental status into consideration, the characteristics of fast eaters tended to indicate an increased risk of type 2 diabetes, renal dysfunction, and hypertension. Dental professionals should provide dietary and lifestyle guidance to fast eaters. Additionally, the number of teeth may not be associated with fast eating.
# Geographical Disparities in Esophageal Cancer Incidence and Mortality in the United States ## Abstract Background: Our previous research on neuroendocrine and gastric cancers has shown that patients living in rural areas have worse outcomes than urban patients. This study aimed to investigate geographic and sociodemographic disparities among esophageal cancer patients. Methods: We conducted a retrospective study of esophageal cancer patients between 1975 and 2016 using the Surveillance, Epidemiology, and End Results database. Both univariate and multivariable analyses were performed to evaluate overall survival (OS) and disease-specific survival (DSS) between patients residing in rural (RA) and urban (MA) areas. Further, we used the National Cancer Database to examine differences in various quality of care metrics by residence. Results: n = 49,421 (RA [$12\%$]; MA [$88\%$]). The incidence and mortality rates were consistently higher in RA during the study period. Patients living in RA were more commonly males ($p \leq 0.001$), Caucasian ($p \leq 0.001$), and had adenocarcinoma ($p \leq 0.001$). Multivariable analysis showed that RA had worse OS (HR = 1.07; $p \leq 0.01$) and DSS (HR = 1.08; $p \leq 0.01$). Quality of care was similar, except that RA patients were more likely to be treated at a community hospital ($p \leq 0.001$). Conclusions: Our study identified geographic disparities in esophageal cancer incidence and outcomes despite similar quality of care. Future research is needed to understand and attenuate such disparities. ## 1. Introduction Esophageal cancer is the 8th most common cancer globally, with an age-standardized incidence rate (ASR) of 6.3 per 100,000 persons in 2020 [1]. As of 2022, the lifetime risk of developing esophageal cancer is 1 in 125 for men and 1 in 417 for women in the US population [2]. While the incidence and mortality trends of esophageal cancer in the US are decreasing, the global trends are reportedly increasing [3].
Age, gender, race, socioeconomic status, and geographic location have been reported to play a role in esophageal cancer incidence and mortality [3]. Males, Black people, people of lower socioeconomic status, and patients in low-income areas have been reported to be at a higher risk of developing and dying from esophageal cancer [3,4,5]. In contrast, a study in Brazil found an inverse relationship between esophageal cancer incidence and the level of urbanization [6]. A similar study utilizing the North American Association of Central Cancer Registries found no significant difference in overall cancer incidence rates between urban and rural areas; however, esophageal cancer incidence rates were higher in rural areas in the US [7]. A possible explanation for these disparities may be differences in the quality of care. It has been well documented that Black patients are more likely to be diagnosed at a later stage and not receive timely definitive treatment, resulting in poorer survival compared with Asian and White patients [8,9,10]. Other patient factors, such as socioeconomic status, insurance status, and the distance required to travel for medical care, can influence the quality of care [11,12]. Interestingly, Clark et al. found that patients at high-volume academic centers had better outcomes than those at low-volume community centers [12]. Our group has previously used the Surveillance, Epidemiology, and End Results (SEER) and the National Cancer Database (NCDB) databases to explore trends and disparities in neuroendocrine [13] and gastric cancers [14] between urban and rural populations in the US. We sought to assess whether any such disparities exist for esophageal cancer by analyzing data from the SEER and NCDB databases. ## 2.1. Data Source The data for this retrospective analysis were extracted from the Surveillance, Epidemiology, and End Results (SEER) database from 1975 to 2016 and the National Cancer Database (NCDB) from 2006 to 2017.
The SEER database is a National Cancer Institute program that collects cancer-related data from various population-based registries, which cover approximately $47.9\%$ of the US population [15]. The SEER database collects patient demographics, primary tumor site, tumor morphology, stage at diagnosis, course of treatment, insurance status, patient location, vital status, and survival data. The data on cancer rates and mortality are received from the Census Bureau and the National Center for Health Statistics. The NCDB is a joint effort by the American College of Surgeons and the American Cancer Society to collect data from hospital cancer registries to evaluate cancer trends and treatment patterns [16]. The NCDB captures data from approximately 1500 Commission on Cancer-accredited facilities, covering nearly $70\%$ of newly diagnosed cancer patients. ## 2.2. Study Population We used International Classification of Diseases for Oncology, 3rd Edition (ICD-O-3) diagnostic codes to identify and include all esophageal cancer patients from the NCDB and SEER databases in our analysis. Patients of all stages (AJCC 6th and 7th editions) were included in the analysis. The residential area of patients was classified as urban or rural based on the Rural-Urban Continuum Codes (RUCC) available in the NCDB and SEER databases. The RUCC codes categorize geographical localities as metropolitan or non-metropolitan and were developed by the Office of Management and Budget based on population. Consistent with our previous research, we categorized counties as urban if they were considered metropolitan (MA) per RUCC coding (RUCC 1–3) and as rural if they were considered non-metropolitan (RA) per RUCC coding (RUCC 4–9) [14]. ## 2.3.
Statistical Analysis The SEER database was utilized to identify and analyze data on patient demographics such as age, race, sex, insurance (insured, uninsured, and unknown), residence (metro [MA] and rural [RA]), marital status, tumor characteristics (histology, grade, and stage), period of diagnosis (1975–1989, 1990–2000, 2001–2010, and 2011–2016), patient vital status, and disease-specific survival (DSS) and overall survival (OS). The incidence and mortality rates for the various time periods were calculated to analyze esophageal cancer trends and to evaluate differences in rates and survival outcomes between RA and MA populations. Similar sociodemographic data were collected for patients in the NCDB database, which included patient age, ethnicity (Hispanic and non-Hispanic), race, sex, insurance provider (government, private, and uninsured), county median income (≤USD 50,353 and ≥USD 50,354), residence (MA and RA), facility at which treated (academic/integrated, community, and unknown), distance traveled for care (miles), tumor characteristics (histology, grade, and stage), period of diagnosis (2006–2011 and 2012–2017), OS data, and quality of care indicators such as the number of regional lymph nodes examined (<15 and ≥15), time from diagnosis to start of treatment, adjuvant and neoadjuvant therapy received (yes and no), chemotherapy received (none, single agent, multiagent, unknown regimen, unknown if chemotherapy was received), surgical margins checked (yes or no), length of inpatient stay, 30-day readmission (planned and unplanned), and 30- and 90-day mortality. Associations between the place of residence and various sociodemographic variables, tumor characteristics, and quality of care metrics were assessed using Wilcoxon Rank Sum (continuous variables) and Chi-square tests (categorical variables).
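The RUCC-based residence classification described in Section 2.2 reduces to a simple code mapping. A minimal sketch, with a function name of our own choosing:

```python
def residence_category(rucc: int) -> str:
    """Map a Rural-Urban Continuum Code to the study's residence groups:
    metropolitan codes 1-3 -> urban (MA); non-metropolitan 4-9 -> rural (RA)."""
    if 1 <= rucc <= 3:
        return "MA"
    if 4 <= rucc <= 9:
        return "RA"
    raise ValueError(f"invalid RUCC code: {rucc}")

print(residence_category(2), residence_category(7))  # MA RA
```

Raising on out-of-range codes mirrors the study's handling of records without valid RUCC codes, which were excluded rather than classified.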
The study’s primary goals were to evaluate incidence and mortality trends in the rural and urban populations between 1975 and 2016 and to estimate OS and DSS using univariate and multivariate Cox proportional hazards modeling. The multivariate model adjusted survival for age, sex, stage, grade, year of diagnosis, insurance status, marital status, race, and area of residence. Kaplan–Meier survival analysis with the log-rank test was used to compare long-term outcomes between urban and rural areas. Incidence rates were calculated for each residence (MA and RA) and decade using the SEER population database. RUCC codes were unavailable for 676 and 2718 cases in the SEER and NCDB databases, respectively; these cases were excluded from all analyses. Statistical significance was indicated by $p \leq 0.05$. All statistical analyses were performed using SAS, version 9.4, statistical software (SAS Institute Inc., Cary, NC, USA). ## 2.4. Reporting Guidelines This study is reported as per Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines for cohort studies (Table S1). ## 3.1. SEER Database A total of 49,421 esophageal cancer patients with RUCC codes were identified in our retrospective analysis of the SEER database between 1975 and 2016. The mean age of the cohort was 65.4 years. Most of the patients were males ($78.6\%$), Caucasian ($75\%$), and had an urban residence ($87.5\%$). A total of 44,048 ($87.9\%$) of the patients died during the 41-year follow-up period. Descriptive characteristics of patients residing in an MA ($87.5\%$) and RA ($12.5\%$) are compared and summarized in Table 1. Patients in RA were more likely to be males (RA vs. MA, $82.1\%$ vs. $78.1\%$; Chi-square test, $p \leq 0.001$), Caucasian ($86.4\%$ vs. $74.2\%$; $p \leq 0.001$), married ($60.5\%$ vs. $55.4\%$; $p \leq 0.001$), and have adenocarcinoma ($64.2\%$ vs. $56.9\%$; $p \leq 0.001$).
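The Kaplan–Meier comparison described in the methods rests on the product-limit estimator, S(t) = Π (1 − d_i/n_i) over event times t_i ≤ t, where d_i deaths occur among n_i subjects at risk. A minimal pure-Python sketch; the times and event flags below are toy values, not SEER records:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate. events: 1 = death, 0 = censored.
    Returns [(t, S(t))] at each distinct death time."""
    data = sorted(zip(times, events))
    n = len(data)
    s, curve, i = 1.0, [], 0
    while i < n:
        t = data[i][0]
        deaths, j = 0, i
        while j < n and data[j][0] == t:   # group tied observations at time t
            deaths += data[j][1]
            j += 1
        if deaths:
            s *= 1 - deaths / (n - i)      # n - i subjects still at risk
            curve.append((t, s))
        i = j
    return curve

curve = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
print([(t, round(s, 2)) for t, s in curve])  # [(2, 0.8), (3, 0.6), (5, 0.3)]
```

Censored subjects (event flag 0) reduce the risk set without dropping the survival curve, which is the estimator's key difference from a naive death proportion.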
Although there was a statistically significant difference in patient insurance status and tumor grade at diagnosis between people residing in RA and MA, data were unknown for a significant portion of the population for both characteristics. There was no significant difference in patient age, tumor stage, or the number of patient deaths. Chi-square and Wilcoxon Rank Sum tests were performed to compare sociodemographic and clinicopathological variables between urban and rural esophageal cancer patients. All significantly different variables ($p \leq 0.05$) are highlighted in bold. Esophageal cancer patients residing in an MA had consistently lower age-adjusted incidence rates between 1975 and 2016 than patients with rural residences. The incidence rate in patients from an RA showed an upward trend, from 4.66 cases/100,000 people in 1975–1989 to 6.40 cases/100,000 people in 2011–2016, whereas the rate in MA was relatively stable, at 2.39 cases/100,000 people in 1975–1989 and 3.07 cases/100,000 people in 2011–2016. Similar to incidence rates, age-adjusted mortality rates were also consistently higher in RA patients. However, unlike incidence rates, mortality rates were relatively stable in both RA and MA patients. Incidence and mortality rates in RA and MA populations are shown in Figure 1 and Table S2. In addition to comparing the trends between rural and urban populations, we performed attributable risk percentage and population attributable risk percentage calculations between these two populations. The attributable risk percentage and the population attributable risk percentage for esophageal cancer incidence ranged from 30.20 to 61.90 and from 1.39 to 6.98, respectively, between 1975 and 2016. The table with the attributable risk percentage and population attributable risk for every year between 1975 and 2016 is presented in the supplementary material as Table S3.
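The attributable risk calculations above follow the standard formulas AR% = (I_exposed − I_unexposed)/I_exposed × 100 and PAR% = (I_population − I_unexposed)/I_population × 100, with rural residence treated as the "exposure". A sketch using the reported 2011–2016 rates for AR%; the whole-population rate used for PAR% is a hypothetical value, since the text does not report it:

```python
def attributable_risk_pct(i_exposed, i_unexposed):
    """AR%: share of the exposed group's incidence attributable to exposure."""
    return (i_exposed - i_unexposed) / i_exposed * 100

def population_attributable_risk_pct(i_pop, i_unexposed):
    """PAR%: share of total population incidence attributable to exposure."""
    return (i_pop - i_unexposed) / i_pop * 100

# Reported 2011-2016 rates per 100,000: RA 6.40, MA 3.07.
ar = attributable_risk_pct(6.40, 3.07)
print(round(ar, 1))  # 52.0 -- within the reported 30.20-61.90 range

# The overall rate 3.30 here is hypothetical (not reported in the text).
par = population_attributable_risk_pct(3.30, 3.07)
print(round(par, 1))
```

PAR% is much smaller than AR% because the rural population is a small fraction of the total, so the excess rural risk dilutes across the whole population.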
We performed univariate (Table 2) and multivariable (Table 3) survival analyses for OS and DSS in esophageal cancer patients. On univariate analysis for OS, increasing age (HR [$95\%$ CI], 1.01 [1.01–1.01]; Wald $p \leq 0.001$), African American race (1.37 [1.33–1.40]; $p \leq 0.001$), single marital status (1.27 [1.25–1.30]; $p \leq 0.001$), and uninsured status (1.43 [1.32–1.54]; $p \leq 0.001$) were associated with poor outcomes. In addition, tumors with squamous cell carcinoma histology (1.30 [1.28–1.33]; $p \leq 0.001$), grade III/IV (1.29 [1.27–1.32]; $p \leq 0.001$), and regional (1.32 [1.29–1.36]; $p \leq 0.001$) and distant (2.76 [2.70–2.83]; $p \leq 0.001$) spread were also associated with poorer outcomes. Patient sex and location of residence were not significant predictors of OS. Similar to OS, worse DSS was associated with patient age (1.01 [1.01–1.01]; $p \leq 0.001$), African American race (1.37 [1.33–1.41]; $p \leq 0.001$), single marital status (1.26 [1.23–1.28]; $p \leq 0.001$), uninsured status (1.44 [1.32–1.56]; $p \leq 0.001$), and tumors with squamous cell carcinoma histology (1.28 [1.25–1.31]; $p \leq 0.001$). Higher grade (III/IV) (1.35 [1.32–1.38]; $p \leq 0.001$) and regional (1.48 [1.44–1.52]; $p \leq 0.001$) and distant (3.28 [3.19–3.37]; $p \leq 0.001$) stages were also found to be poor prognostic indicators on univariate analysis (Table 3). The location of residence was not associated with either OS or DSS (Figures S1 and S2). Univariate Cox proportional hazards modeling for OS and DSS with HRs and $95\%$ CIs for sociodemographic and clinicopathological variables available in the SEER database is shown. All statistically significant ($p \leq 0.05$) outcomes are highlighted in bold. For multivariable analysis, age, sex, race, marital status, insurance status, tumor histology, grade and stage, residence, and year of diagnosis were used as covariates.
Multivariable analysis confirmed the results of univariate analysis for age, race, marital and insurance status, tumor histology, stage and grade, and year of diagnosis as significant prognostic indicators for both OS and DSS. Additionally, female sex was found to be associated with better OS (0.87 [0.85–0.90]; $p \leq 0.001$) and DSS (0.90 [0.87–0.92]; $p \leq 0.001$). In contrast to the univariate analysis, patients residing in RA had significantly poorer OS (1.07 [1.04–1.10]; $p \leq 0.001$) and DSS (1.08 [1.04–1.11]; $p \leq 0.001$) on multivariable analysis (Table 4; Figure 2). Multivariate Cox proportional hazards modeling for OS and DSS with HRs and $95\%$ CIs for sociodemographic and clinicopathological variables available in the SEER database is shown. All variables collected from the SEER database were used as covariates in the multivariate model. All statistically significant outcomes ($p \leq 0.05$) are highlighted in bold. ## 3.2. NCDB To better understand the differences in incidence and mortality rates and survival observed in the SEER data between patients residing in RA and MA, we analyzed the quality-of-care variables available in the NCDB database. A total of 72,226 esophageal cancer patients with RUCC codes were identified in our retrospective analysis; 12,930 ($17.9\%$) resided in a rural area and 59,296 ($82.1\%$) in an urban area. Data about treatment facility, time from diagnosis to treatment, type of chemotherapy, sequence of radiation therapy, time from diagnosis to surgery, number of lymph nodes examined, surgical margin status, length of stay, planned or unplanned 30-day readmission, and 30- and 90-day mortality were evaluated as measures of the quality of care from the NCDB for patients diagnosed between 2006 and 2017. We saw a statistically significant difference between RA and MA patients for most of the quality-of-care variables, as shown in Table 4.
However, a clinically significant difference was found only for county median income, type of insurance, distance traveled for treatment, and type of treatment facility. Patients in RA were more likely to live in counties with a median income ≤ USD 50,353 ($72.5\%$ vs. $36.9\%$; $p \leq 0.001$) and to be insured by a government entity ($61.6\%$ vs. $57.2\%$; $p \leq 0.001$). Furthermore, they traveled farther to receive care (mean in miles [standard deviation], 67.1 [130.1] vs. 27.0 [107.4]; Wilcoxon Rank Sum test $p \leq 0.001$) and more often received care at a community facility ($57.3\%$ vs. $43.9\%$; $p \leq 0.001$). Chi-square and Wilcoxon Rank Sum tests were performed to compare quality of care variables between urban and rural esophageal cancer patients. All significantly different variables ($p \leq 0.05$) are highlighted in bold. ## 4. Discussion In this population-based retrospective analysis of the SEER and NCDB databases, we found that esophageal cancer incidence and mortality rates steadily increased from 1975 to 2016 in both rural and urban areas. Over this period, patients residing in RA consistently had higher incidence and mortality rates. Interestingly, DSS and OS were not associated with residence on univariate analysis. However, on multivariable analysis for DSS and OS, RA patients had HRs of 1.08 (1.04–1.11) and 1.07 (1.04–1.10), respectively. This suggests that other variables and factors may contribute to the differences in survival. To explore these factors, we analyzed differences in variables reflecting the quality of care between RA and MA patients and found that RA patients received a similar quality and type of treatment as MA patients. This suggests that a combination of factors may explain these discrepancies. Our study shows that the age-adjusted incidence and mortality rates in both urban and rural populations increased consistently between 1975 and 2016; this is in contrast to the study performed by Uhlenhopp et al.
That study shows a downward trend for both the incidence and mortality rates over a similar time period using the SEER database. A possible explanation for this could be that our study included only those patients in SEER and NCDB for whom RUCC codes were available. We found that patients residing in RA were more likely to be male. Sociodemographic factors have been reported to play a significant role in esophageal cancer incidence, treatment, and survival [17]. Studies have reported a male-to-female incidence ratio of 9:1 for esophageal adenocarcinoma [18,19] and a higher incidence of high-grade disease in males [20]. Differences in hormonal levels of estrogen and insulin, growth factors such as IGF-1, and inflammatory mediators have been proposed as possible explanations for these differences in esophageal and other cancer incidence and survival [18,21]. A study describing the costs of care at various stages of treatment for different cancers reported that the costs of initial and end-of-life care for esophageal cancer patients were USD 20,433 and USD 18,760, respectively, among the highest across various cancers [22]. A study examining colorectal, lung, cervical, and breast cancer trends in the US found that uninsured patients with decreased or no physician contact were less likely to undergo age-appropriate screening for cancer [23]. While insured patients showed better outcomes than uninsured patients, insurance type is also a significant predictor of survival [24]. In our analysis, we found that RA patients were more likely to have a lower income and more likely to be either uninsured or insured by a government agency, which could explain the worse survival in the rural population. Quality-of-care disparities can explain the differences observed across socioeconomic strata. Patients with a lower socioeconomic status are more likely to be victims of these disparities and have poorer outcomes [25]. 
These disparities may stem from decreased availability of high-quality care or increased difficulty accessing such care. In our study, we found that patients residing in RA were more likely to travel farther and receive care at a community center than their MA counterparts, who received care at integrated academic institutions. Although RA patients were more likely to be treated at community centers, they had similar 30-day unplanned readmission and 30-day mortality rates to urban patients, but their 90-day mortality was higher. This observation was similar to the study reported by Boffa et al., who found that patients treated at affiliate hospitals had better surgical margins, a similar 30-day mortality rate, and a higher 90-day mortality rate [26]. These results are hypothesis-generating, suggesting that immediate peri-operative care does not always translate into long-term outcomes in esophageal cancer. Another advantage commonly stated for surgical treatment at academic centers is the improved mortality rate with increased annual hospital and surgeon volumes, as seen in a meta-analysis by Brusselaers et al. [27]. The Leapfrog Group, an advocacy organization, suggested a minimum hospital volume and surgeon volume of 20 and seven, respectively, for esophagectomies [28]. While adopting such standards might not decrease the average cost of an esophagectomy, higher hospital and surgeon volumes have decreased complications and length of stay, which are the biggest drivers of cost [12,29,30]. Although no federal mandate exists in the US towards regionalization, there has been a $12.4\%$ decline in the number of centers offering esophagectomy between 2004 and 2012 [31]. This consolidation of esophagectomy centers was associated with fewer patients treated at low-volume centers, an improved 90-day mortality rate, improved lymph node harvest, and a decreased length of stay and positive margin rate. 
While regionalization brings improved outcomes and decreased medical costs, robust structures and strategies must be implemented to decrease the risk of further marginalizing socioeconomically disadvantaged sections of society from accessing quality care. To our knowledge, this is the first population-based study investigating the disparities in incidence and mortality trends of esophageal cancer between RA and MA populations using national databases in the US. We also evaluated how sociodemographic variables impact patients’ overall and disease-specific survival. Additionally, we used the NCDB database to identify differences in quality-of-care metrics, which might explain the difference in survival observed between the two populations. However, our study has limitations, including missing and unknown data, most notably for stage, grade, insurance status, treatment specifics, and positive margin rate. Secondly, selection and misclassification bias may have impacted the study, given its retrospective nature. Although we used previously reported definitions for rural and urban areas, the differing definitions of rurality may cause a misclassification bias [13,14]. ## 5. Conclusions Our SEER-based analysis found significant sociodemographic differences between esophageal cancer patients in RA vs. MA. We found that despite the advances in diagnostic and treatment techniques, the incidence and mortality rates increased between 1975 and 2016. Additionally, the rate of increase and the absolute rates were consistently higher in RA over this period. Multivariable survival analysis showed significantly poorer overall and disease-specific survival in RA patients. 
Although quality-of-care metrics were similar between the two populations, the larger proportion of male patients, lower median income and socioeconomic status, difficulty accessing care, and treatment at community centers amongst rural patients could be some of the possible explanations for the observed disparities in incidence, mortality rates, and survival between the two populations in the US. Our results are consistent with similar studies in other countries and with studies in the US evaluating other cancers [4,5,6,13,14]. Our findings suggest that future research with more robust datasets is required to understand the underpinnings of the observed disparities. This understanding can be used to develop tailored healthcare policies needed to improve the quality of care for all esophageal cancer patients in the US.
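The OS and DSS comparisons above rest on standard survival-analysis machinery. As a minimal sketch of the underlying product-limit (Kaplan–Meier) estimator on synthetic follow-up data — an illustration only, not the authors' code, since the study itself fit multivariable Cox models to SEER records — one could write:

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.

    times  -- follow-up time for each patient (e.g., months)
    events -- 1 if the event (death) was observed, 0 if censored
    Returns a list of (event time, survival probability) pairs.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = removed = 0
        # group all subjects sharing this follow-up time
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:  # the survival curve drops only at observed event times
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= removed
    return curve

# Synthetic cohort: five patients, three deaths (t = 1, 2, 4), two censored
print(kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0]))
# survival is roughly 0.8, 0.6, and 0.3 at t = 1, 2, and 4
```

Censored subjects (event indicator 0) leave the risk set without lowering the curve, which is exactly what distinguishes this estimator from a naive proportion of survivors.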
# Effect of Spatiotemporal Parameters on the Gait of Children Aged from 6 to 12 Years in Podiatric Tests: A Cross Sectional Study ## Abstract The use of lower limb tests in the paediatric population is of great importance for diagnostic evaluations. The aim of this study is to understand the relationship between the tests performed on the feet and ankles, covering all of their planes, and the spatiotemporal parameters of children’s gait. Methods: This is a cross-sectional observational study. Children aged between 6 and 12 years participated. Measurements were carried out in 2022. An analysis of three tests used to assess the feet and ankles (the FPI, the ankle lunge test, and Jack’s test), as well as a kinematic analysis of gait using OptoGait as a measurement tool, was performed. Results: The spatiotemporal parameters show that Jack’s test is significant in the propulsion phase in its % parameter, with a p-value of 0.05 and a mean difference of $0.67\%$. Additionally, in the lunge test, we studied the % of midstance in the left foot, with a mean difference between the positive test and the 10 cm test of 10.76 (p-value of 0.04). Conclusions: The diagnostic analysis of the functional limitation of the first toe (Jack’s test) is correlated with the spatiotemporal parameter of propulsion, as is the lunge test, which is also correlated with the midstance phase of gait. ## 1. Introduction The use of lower limb tests in the paediatric population is of great importance for diagnostic evaluation and treatment. The use of these tests to assess foot and ankle functions is a controversial topic, showing a lack of consensus on how foot and ankle functions should be measured, defined, or assessed [1]. Therefore, high-reliability morpho-functional tests of the foot and ankle should be used [2,3] in order to establish an association between the morpho-functional variants [4] and other variables, such as weight [5], laxity [6], physical activity [7], or gait. 
A child’s gait can be influenced by many intrinsic factors — limb length, joint range, muscle tone [8], and neuromuscular diseases [9] — and also by extrinsic factors, such as footwear, clothing, or carrying loads, which may change the walking pattern [10]. Measures of spatiotemporal gait parameters are used to identify and diagnose walking difficulties and may also determine the prognosis [11]. Although in many cases we are not able to interrelate the different existing diagnostic tests of the foot, this is of vital importance to facilitate subsequent treatment, so determining this interrelation should be a priority issue in lower extremity examinations. Despite this, fundamental information on joint and foot mechanics in typically developing children and children with pathological conditions, as noted by Deschamps et al., has not yet been provided. Indeed, the joint kinetic profiles that have been reported in the past focus on the major joints of the lower extremity (e.g., the hip, the knee, and the ankle) [12]. However, joints distal to the ankle play a key role in energy absorption and generation during gait [13,14]. Technical limitations are the main reason why this biomechanical perspective remains understudied [15,16]. The OptoGait system is more readily accessible and can be used in primary care consultations. It is based on photoelectric cells and is validated for the assessment of the phases of gait in clinical and research settings [17,18]. The coefficient of variation in method error values was low, ranging from $1.66\%$ to $4.06\%$, and all the parameters presented standard errors of measurement between $2.17\%$ and $5.96\%$, indicating strong reliability. OptoGait has demonstrated excellent reliability for the variables of step rate, step length, and contact time during treadmill and ground walking, as well as good test–retest reliability in healthy and injured adults [19]. 
However, very few studies have evaluated the spatiotemporal parameters of gait with this system in children [20,21]. As described above, there is a lack of information on the relationship between foot tests and gait parameters in the paediatric population to guide our clinical practice. The hypothesis was that foot tests could be related to gait analysis. Therefore, the aim of this study is to relate diagnostic tests on the foot, covering all of its planes, with the spatiotemporal parameters of children’s gait. ## 2.1. Ethical Approval The parent and/or legal guardian were provided with information about the study, and a statement attesting to informed consent was signed. The children were fully informed of the procedures involved and gave their consent. All the procedures were in accordance with the ethical standards of the institution; the experimental protocol was approved by the institutional ethics committee of the University of Malaga (CEUMA 91/2016-H) and complied with the 1964 Helsinki Declaration. ## 2.2. Study Design This is a cross-sectional observational study, in which the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) criteria were followed. ## Sample Size Calculation To calculate the sample size, we used Epidat 4.2 software (Epidemiology Service of the General Directorate of Public Health of the Consellería de Sanidade (Xunta de Galicia)), and we used the paper by Montes-Alguacil et al. [22] to obtain the means and standard deviations for the main outcome variables: the heel contact phase, the flat foot phase, and the propulsion phase. The study was designed to detect changes exceeding 0.8 (high effect size) with a type I error of 0.05 and a type II error of 0.2. This calculation yielded a necessary sample size of 48 subjects, although, in fact, 50 were recruited to cover any potential missing data. ## 2.3. Participants Children aged between 6 and 12 years participated. Measurements were carried out in 2022. 
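As a rough cross-check of the power calculation described in the study design above (effect size 0.8, α = 0.05, power 0.8), the usual normal-approximation formula for a two-group comparison gives a similar figure. This is a sketch only; Epidat 4.2's exact algorithm and rounding may differ, which likely explains the small gap with the reported 48 subjects.

```python
import math

def n_per_group(d, z_alpha=1.959964, z_beta=0.841621):
    """Normal-approximation sample size per group for a two-sided
    two-group comparison of means at standardized effect size d.

    z_alpha -- critical value for a two-sided 5% test
    z_beta  -- quantile giving 80% power
    """
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

n = n_per_group(0.8)
print(n, 2 * n)  # 25 per group, i.e. about 50 subjects in total
```

The total of about 50 sits just above the 48 reported, consistent with the 50 children actually recruited.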
The participants were recruited at the Faculty of Health Sciences of the University of Malaga (Spain). The inclusion criteria were participants aged between 6 and 12 years old who were not experiencing any foot pain at the time of the assessment. Participants who had any of the following conditions were excluded from the study: recent injuries of the lower limbs, congenital structural alterations affecting distal areas of the ankle joint, pathological flat feet caused by cerebral palsy, surgical treatments of the foot or lower extremities, or affectations of a genetic, neurological, or muscular nature. ## 2.4. Procedure The tests were performed by two different clinicians, each blinded to the other’s results. One performed the three tests used to evaluate the foot and ankle (the FPI, the ankle lunge test, and Jack’s test), and the other performed the gait analysis (MMR) using the OptoGait gait cycle measurement tool. ## 2.4.1. Foot Posture Index (FPI) The assessment of the foot posture was carried out by measuring the FPI with barefoot subjects in a relaxed standing position to facilitate the visual and physical inspection. The inter-examiner reliability for the FPI in the paediatric population reached a consistent weighted Kappa value (Kw = 0.86) in a sample of children aged between 5 and 16 years, and the categorization was performed using the criteria of Gijon-Nogueron et al. [23,24,25]. ## 2.4.2. Ankle Lunge Test The range of ankle dorsiflexion was determined by the lunge test, which is a weight-bearing test of the range of ankle dorsiflexion with the knee flexed. The participant stood in a relaxed position on a solid, horizontal surface facing a vertical wall. The test foot was placed parallel to a tape measure secured to the floor, with the second toe, the centre of the heel, and the knee perpendicular to the wall. 
To promote upright balance during the test, the opposite limb was positioned approximately 1 foot behind the test foot in a comfortable tandem stance, and the subjects placed their hands on the wall. The test involved the participant pushing their knee as far forward over the foot as possible while keeping the heel on the ground. The maximum angle of advancement of the tibia relative to the vertical was recorded as a measure of ankle dorsiflexion using a digital inclinometer (Smart Tool™) applied to the anterior surface of the tibia. The intra-examiner intraclass correlation coefficient was 0.98, and the inter-examiner reliability reached an excellent weighted Kappa value (Kw = 0.97) [26]. ## 2.4.3. Jack’s Test In 1953, Jack described one of the first methods to evaluate the first metatarsophalangeal joint (MPJ) in a weight-bearing position, and this method is still commonly used today. It is also known as the Hubscher manoeuvre [27]. The utility of the test assumes that restriction during the static manoeuvre is predictive of the functional limitation of this joint during gait. The test involves the examiner manually dorsiflexing the hallux while the patient stands in a relaxed double-limb stance position. A normal response is for these structures to ‘tighten’ and the medial longitudinal arch to rise. This response is commonly referred to as the ‘windlass mechanism’. A failed test was recorded when the examiner was unable to dorsiflex the hallux from the weightbearing surface without the application of excessive force. The intra-examiner intraclass correlation coefficient was 0.89. ## 2.4.4. Spatiotemporal Gait by OptoGait System The gait parameters were assessed using the OptoGait® portable photocell system [17,18]. This system provides real-time numerical parameters related to stepping, running, and jumping. 
The participants were instructed to walk barefoot, as per their normal walking behavior, for a distance of five meters between two parallel bars. Six to eight strides are sufficient to obtain representative data for unimpaired adults [17], and in our case, ten strides were measured. After three trials, the data were acquired. A highly experienced podiatrist (MMR) (with more than 1000 OptoGait® test examinations performed) controlled the measurement process at all times. The software used was OptoGait v.1.11.1.0. The OptoGait system was calibrated and checked for accuracy at all times to provide an exhaustive and reliable measurement of the spatiotemporal phases of the gait cycle. The heel contact phase (Phase 1) is the time from the initial ground contact (at least 1 LED activated) to foot flattening (the number of activated LEDs stays steady within ±2 LEDs). The foot-flat phase (Phase 2) is the time from foot flattening to the initial take-off, and the propulsive phase (Phase 3) runs from the initial take-off until the end of the motion. ## 2.5. Statistical Analysis The descriptive statistics obtained included measures of central tendency and dispersion and the distribution of percentages. An exploratory analysis, including the Kolmogorov–Smirnov test and an examination of symmetry and kurtosis, was performed to confirm the normality of the distributions. Subsequently, a bivariate analysis of the differences of the means using Student’s t-test was applied to evaluate the differences in the gait parameters according to the results of Jack’s test. The differences in the gait parameters were identified by ANOVA according to the four FPI groups and the three lunge test groups established. The homoscedasticity of the distributions was determined by the Levene test. In addition, the Brown–Forsythe test of robustness was applied, and a post hoc analysis was performed using the Bonferroni test. 
The level of statistical significance was set at $95\%$ in all cases, and all the analyses were conducted using SPSS v.23 statistical software (SPSS Inc., Chicago, IL, USA). ## 3. Results The sample in this study was composed of 50 healthy school children, 29 ($58\%$) girls and 21 ($42\%$) boys aged 6–12 years (mean age: 8.96 years, SD: 1.83). The mean BMIs were 18.45 kg/m2 (SD: 3.30) for the girls and 18.74 kg/m2 (SD: 3.52) for the boys. The difference between the genders was not statistically significant (t = −0.23; p = 0.814). According to the results observed, Jack’s test showed, for the left foot, 42 negative and 8 positive results, and for the right foot, 44 negative and 6 positive results. Regarding the lunge test, for the left foot, 12 results were positive (unable to perform the test) and 38 results were negative (20 negative at 5 cm and 18 negative at 10 cm), and for the right foot, 10 results were positive and 40 results were negative (22 negative at 5 cm and 18 negative at 10 cm) (Table 1). The spatiotemporal parameters show that the results of Jack’s test are significant in the propulsion phase in its % parameter, with a p-value of 0.05 and a mean difference of $0.67\%$ (Table 2). If we divide the groups according to age, significant differences are only observed for the same measures as those in the global analysis (p = 0.038, left propulsive phase (%)), with the rest having p-values greater than 0.05 (Figure 1). Regarding the lunge test (Table 3), according to age, significant differences were only observed in the 10-year-old sample in the flat foot phase on the right foot (p = 0.022), with the rest having p-values greater than 0.05 (Figure 2). Finally, a significant relationship is observed between the gait parameters and the foot posture, following the foot posture index. 
We found a significant difference in the step length between the pronated posture and the normal posture in the right foot (with a mean difference of 5.51 cm and a p-value of 0.05), as well as in the contact phase (with a mean difference of 0.63 s and a p-value of 0.05) (Table 4). Regarding the FPI test, according to age, significant differences were only observed in the 8-year-old sample in the flat foot phase on the right foot (p = 0.009), with the rest having p-values greater than 0.05 (Figure 3). ## 4. Discussion The tests used for the evaluation of the foot can be related to the different phases of the gait cycle, verifying which phases they influence. The objective of this study was to relate the tests performed on the foot, covering all of its planes, with the spatiotemporal parameters of children’s gait. The gait cycle begins when the foot makes contact with the ground, and it ends when the same foot makes contact with the ground again. The first ray is fundamental both in the full stance phase, in which it serves as a mobile adapter to the irregularities of the ground, forming an internal longitudinal arch, and in the propulsive phase, in which its function is to become a rigid segment capable of transferring the weight of the body forward [28]. The windlass mechanism [13] is closely related to the propulsion phase, since the correct movement of the first metatarsophalangeal joint, together with the locking of the midtarsal joint, makes this gait phase functional [29]. This windlass mechanism creates tension in the plantar aponeurosis, with tensile forces approaching $100\%$ of the body weight. Although it is highly variable, the arch rises rapidly during the late stance phase, and the navicular demonstrates an average rise of 6 mm during late push-off. Depending on the foot model used for the gait analysis, dorsiflexion of the first metatarsophalangeal joint averages around 30–50° during this same period [30]. 
Jack’s test was devised to clinically evaluate the function of this first ray; therefore, a positive test would reflect an inability to dorsiflex the first metatarsophalangeal joint (causing what is known as functional hallux limitus) and, consequently, an alteration of the windlass mechanism mentioned above [31]. This is observed in our results, which show an increase in the percentage of the propulsion phase in patients with a positive Jack’s test (difference of 0.67 and a p-value of 0.05). Those patients who were unable to perform Jack’s test properly, and in whom, therefore, the windlass mechanism does not work properly, presented a higher percentage in the propulsion phase. However, this occurred only in the left foot, which leads us to believe that in many cases there are also differences between the stance phases of both feet, which would suggest a more exhaustive, unilateral analysis of gait [14]. This contrasts with the opinion of some authors who defend that the presence of limitation of dorsal flexion in this test is not indicative of the limitation of this movement in gait, but that there is a relationship between the pronation of the foot and the limitation of the first metatarsophalangeal joint, which could explain another part of our results [31,32]. However, their study population consisted of patients older than 18 years, and they focused only on the movement of the first metatarsophalangeal joint and not on the entire windlass mechanism. These variables may have influenced the conclusion. Another very important structure in the human gait cycle is the Achilles-calcaneal-plantar system. Weight-bearing motion in the first MP joint depends on structures that are not located at the joint itself, but more proximal ones. 
Among these structures, the Achilles-calcaneal-plantar system and the medial column of the foot are mainly responsible for optimally setting the first MPJ to provide anteromedial support to the foot during the third rocker, or propulsive phase, of gait; this requires adequate passive dorsiflexion of the joint while the hallux is purchasing the ground and the verticalized first metatarsal is axially loading the hallux sesamoid complex [33]. At this point, the function of the first ray is very important. The first 20° of dorsiflexion are achieved thanks to the triceps surae, which raises the heel and makes the first metatarsal initiate its plantar flexion movement. Limited passive dorsiflexion of the first MP joint limits the motion in the sagittal plane, which is necessary for the forward progression of the body during gait [34]. During the second rocker, the tibia must glide forward on the ankle to allow the body’s center of mass to progress from an initial position posterior to the supporting foot to a final position that is anterior to it. A restriction of ankle passive dorsiflexion during the second rocker will increase dorsiflexion moments at the forefoot. Under normal conditions, during the second rocker, the position of the foot must change from pronation to supination; from a relaxed and cushioning conformation to a rigid and propulsive one. If the ankle is unable to provide the necessary passive dorsiflexion for the centre of mass to be placed in front of its vertical plane, one of the ways to achieve these degrees of dorsiflexion is the pronation of the foot [33]. Our results reinforce all of this, showing a relationship between the lunge test (which, as mentioned above, is predictive of dorsiflexion limitation) and an alteration in the full foot contact phase and in the propulsion phase. 
Regarding these two phases of gait, we observed a decrease in the time of the total contact phase, with the foot spending less time on the floor in those patients tested at 5 cm, as well as an increase in the time of the propulsion phase in those patients tested at 10 cm. The results of the lunge test and Jack’s test could be a starting point for interpreting the existing literature, which describes that an increase in the dorsiflexion of the ankle during the end of the stance phase produces stress on the Achilles tendon and a decrease in plantarflexion during the propulsion period [35,36]. Finally, another of the tests analysed, which also had an impact on gait, was the analysis of foot posture during pronation or supination according to the FPI, where it was observed that foot posture influences the mid-contact phase, producing a contact time 0.06 s longer in pronated feet than in neutral feet. These data, although not obtained using the same measuring tool, are similar to those proposed by Mahaffey et al. [37], who reported an increase in the pronation phase of the midfoot. Caravaggi et al. [38] showed significant postural and kinematic alterations in the midtarsal and tarsometatarsal joints of adolescents with planus valgus feet. The objective identification and quantification of planus valgus foot alterations via non-invasive gait analysis is relevant to improving the diagnosis of this condition and to evaluating the effect of conservative treatments and of surgical corrections by different techniques. Therefore, both our results and those of other authors show that a pronated foot (measured using the FPI test), as well as a limitation of the dorsal flexion of the ankle (measured using the lunge test) and a limitation of the movement of the first metatarsophalangeal joint (measured using Jack’s test), are factors that influence different moments in the gait cycle. 
In our case, in addition, it is demonstrated that the tests used on the foot are predictive of these features of gait and that, therefore, a relationship can be established between these tests and the spatiotemporal parameters of gait; this relationship should also be studied in different populations, such as the child population. The clinical implication of this study is related to the exploration of a new option for assessing the feet and gait, because clinicians will not always have the tools to evaluate the gait parameters. These tests could therefore serve as a proxy for such measures, but they will never be a replacement for them. All of these results should be approached with caution, since the study has limitations. The main one is the size of the sample, since it is a convenience sample. This sample was obtained in an exploratory manner, and we had to classify the participants into different subgroups by age, sex, and parameters such as physical activity and weight. In addition, it has the limitation of being a cross-sectional study, which always provides punctual data rather than an evolution of the data over time, which would be desirable. Even so, to our knowledge, it is the first study to relate spatiotemporal parameters with foot tests in children in order to answer one of the great questions of clinicians, which is the interrelation between gait and diagnostic tests. ## 5. Conclusions The diagnostic test of the functional limitation of the first toe (Jack’s test) is correlated with the spatiotemporal parameter of propulsion, as is the lunge test, which also correlates with the midstance phase of gait, which in turn correlates with the posture of the foot, where an increase is observed in the contact time of pronated feet.
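The OptoGait phase definitions given in the Methods (Section 2.4.4) lend themselves to a simple segmentation rule over per-frame LED counts. The sketch below is a hypothetical illustration on synthetic data, not OptoGait's proprietary algorithm; the thresholds (any activated LED for initial contact, a count steady within ±2 LEDs for foot flattening, a falling count for initial take-off) are an assumed reading of the definitions in the text.

```python
def stance_phases(led_counts):
    """Split one stance into heel-contact, foot-flat and propulsion phases.

    led_counts -- number of interrupted LEDs per frame for a single step
    Returns the three phase durations as percentages of total stance time.
    """
    n = len(led_counts)
    # Phase boundaries, following the definitions in Section 2.4.4:
    contact = next(i for i, c in enumerate(led_counts) if c > 0)        # initial ground contact
    end = next(i for i in range(contact + 1, n) if led_counts[i] == 0)  # end of motion (foot off)
    # foot flattening: the LED count becomes steady (changes by <= 2 LEDs)
    flat = next(i for i in range(contact + 1, end)
                if abs(led_counts[i] - led_counts[i - 1]) <= 2)
    # initial take-off: the LED count starts to fall as the heel lifts
    takeoff = next(i for i in range(flat + 1, end)
                   if led_counts[i] < led_counts[i - 1] - 2)
    stance = end - contact
    return (100 * (flat - contact) / stance,   # Phase 1: heel contact
            100 * (takeoff - flat) / stance,   # Phase 2: foot flat
            100 * (end - takeoff) / stance)    # Phase 3: propulsion

# Synthetic step: count rises on heel strike, plateaus, then falls at push-off
print(stance_phases([0, 0, 3, 6, 9, 10, 10, 10, 10, 7, 4, 1, 0, 0]))
# -> (30.0, 40.0, 30.0)
```

Phase percentages of this kind are exactly the `%` parameters compared across Jack's test, lunge test, and FPI groups in the Results.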
# Impact of Sertraline, Fluoxetine, and Escitalopram on Psychological Distress among United States Adult Outpatients with a Major Depressive Disorder ## Abstract How impactful is the use of Sertraline, Fluoxetine, and Escitalopram monotherapy on psychological distress among adults with depression in the real world? Selective serotonin reuptake inhibitors (SSRIs) are the most commonly prescribed antidepressants. Medical Expenditure Panel Survey (MEPS) longitudinal data files from 1 January 2012 to 31 December 2019 (panels 17–23) were used to assess the effects of Sertraline, Fluoxetine, and Escitalopram on psychological distress among adult outpatients diagnosed with a major depressive disorder. Participants aged 20–80 years without comorbidities, who initiated antidepressants only at rounds 2 and 3 of each panel, were included. The impact of the medicines on psychological distress was assessed using changes in Kessler Index (K6) scores, which were measured only in rounds 2 and 4 of each panel. Multinomial logistic regression was conducted using the changes in the K6 scores as the dependent variable. A total of 589 participants were included in the study. Overall, $90.79\%$ of the study participants on antidepressant monotherapy reported improved levels of psychological distress. Fluoxetine had the highest improvement rate of $91.87\%$, followed by Escitalopram ($90.38\%$) and Sertraline ($90.27\%$). The findings on the comparative effectiveness of the three medications were not statistically significant. Sertraline, Fluoxetine, and Escitalopram were shown to be effective among adult patients suffering from major depressive disorders without comorbid conditions. ## 1. Introduction Approximately 15 million physician office visits with depressive disorders as the primary diagnosis were recorded in 2019 [1]. 
An estimated 21 million adults and 4.1 million adolescents aged 12 to 17 in the USA in 2017 had at least one major depressive episode, representing $8.4\%$ and $17\%$ of the respective USA populations [2]. According to the World Health Organization, depression is ranked as the most significant cause of disability worldwide and contributes heavily to the global disease burden [3]. Depression is a major contributing factor to suicide and ischemic heart disease [4]. “According to the Global Burden of Disease study, major depressive disorder was recorded as the mental health disorder with the highest economic burden, accounting for 2.7 million disability-adjusted life years in 2016” [5]. In 2018, the economic burden of depression was estimated at USD 326 billion, representing an increase of $37.9\%$ between 2010 and 2018 [6]. “The Center for Disease Control emphasizes that over the past two decades, the use of antidepressants has experienced tremendous growth, making them one of the most expensive and third most prescribed drugs in the USA” [7]. First-generation antidepressants, such as tricyclic antidepressants and monoamine oxidase inhibitors, used to be the main treatment for depression, but they are no longer preferred in many clinics due to their serious side effects, such as orthostatic hypotension and insomnia [8,9,10]. Second-generation antidepressants, including selective serotonin reuptake inhibitors (SSRIs), serotonin and norepinephrine reuptake inhibitors (SNRIs), and dopamine reuptake inhibitors, have fewer side effects than first-generation antidepressants [11]. Fluoxetine and Sertraline were among the first SSRIs approved for depression treatment in the 1990s, and Escitalopram was introduced in 2003 [12]. Although the different classes of second-generation antidepressants have similar effectiveness on quality of life, they differ in their pharmacokinetics, pharmacodynamics, and side effects, which may impact treatment selection [13]. 
Fluoxetine has a lower specificity for the serotonin transporter (SERT) than other SSRIs, but a better binding specificity than tricyclic antidepressants and monoamine oxidase inhibitors [14,15]. Fluoxetine can lead to weight loss, agitation, and anxiety; Sertraline is associated with a higher incidence of diarrhea; and Escitalopram has a higher likelihood than other SSRIs of causing QT prolongation [16,17,18]. In clinical practice, second-generation antidepressants are prescribed for many conditions other than depression, such as anxiety, sleeping disorders, psychosis, and neuropathic pain [19]. “Sertraline is currently approved for major depressive disorder, obsessive-compulsive disorder, panic disorder, post-traumatic stress disorder, seasonal affective disorder, and premenstrual dysphoric disorder” [14]. Escitalopram is also used in the management of generalized anxiety disorder, while Fluoxetine is used in the treatment of premenstrual dysphoric disorder [14]. The choice of antidepressants is influenced by drug profiles, physician characteristics, patient characteristics, and other factors such as comorbidities [20,21]. “Psychological distress refers to non-specific symptoms of stress, anxiety, and depression. High levels of psychological distress are indicative of impaired mental health and may reflect common mental disorders, like depressive and anxiety disorders” [22]. Research has shown that individuals with depression often experience high levels of psychological distress in various areas of life, which leads to a decline in physical, emotional, and social functioning [23]. Physical symptoms of depression, such as fatigue and changes in appetite and sleep patterns, can negatively impact an individual’s ability to engage in physical activity and maintain good physical health [23,24]. Emotional symptoms, such as feelings of sadness and hopelessness, can lead to difficulty in maintaining personal relationships and a lack of interest in activities.
Social functioning may also be affected, as individuals with depression may withdraw from social interactions and have difficulty in forming and maintaining relationships [23]. In addition to the negative impact of psychological distress, depression also increases the risk of various physical health problems, such as cardiovascular disease, diabetes, and obesity, which can be attributed to unhealthy coping mechanisms such as overeating, lack of physical activity, and substance abuse [24,25]. It is important for individuals with depression to receive appropriate treatment and support to improve their overall well-being and functioning. There are widely used survey instruments for measuring psychological distress in people with depression, such as the Patient Health Questionnaire-9 (PHQ-9), the Beck Depression Inventory (BDI), and the Kessler Psychological Distress Scale (K6). “The PHQ-9 is a self-administered questionnaire that assesses the presence and severity of depressive symptoms over the past two weeks, consisting of nine items rated on a four-point Likert scale” [26]. The BDI is a 21-item self-report inventory that measures the presence and severity of depression symptoms over the past two weeks, assessing symptoms such as sadness, hopelessness, and self-esteem, each rated on a four-point Likert scale [27]. The K6 is a brief, self-administered questionnaire that assesses symptoms of non-specific psychological distress over the past 30 days, consisting of six items rated on a five-point Likert scale. A score of 13 or higher on the K6 is considered to indicate clinically significant psychological distress [28]. The K6 is a reliable and valid measure of psychological distress among patients with depression.
It has good test–retest reliability, with a correlation coefficient of 0.8, and strong concurrent validity, as it correlates well with other measures of depression and anxiety and is able to discriminate between patients with depression and those without depression [28,29]. Over $40\%$ of depression patients fail to improve with conventional treatment, which involves using a single antidepressant agent at a prescribed dose and duration [30,31,32,33]. In spite of the considerable amount of data available on the clinical efficacy of second-generation antidepressants, there remains insufficient evidence on the real-world impact of the most widely prescribed second-generation antidepressants on patient-reported outcomes. This study evaluated the effectiveness of the most commonly prescribed antidepressants, Sertraline, Fluoxetine, and Escitalopram, on psychological distress among various subgroup populations based on age, race, and sex using a nationally representative sample in the United States. ## 2.1. Data Source The current retrospective longitudinal study was conducted to examine the effectiveness of Sertraline, Fluoxetine, and Escitalopram monotherapy on psychological distress as a patient-reported outcome among the non-institutionalized US population using the Medical Expenditure Panel Survey (MEPS). The MEPS data used in this study spanned the period 1 January 2012 to 31 December 2019 (panel 17 to panel 23) [34]. The MEPS provides nationally representative estimates of health care use, expenditure, sources of payment, health insurance coverage, and demographic characteristics, additionally providing data on respondents’ health status, employment, access to care, and satisfaction with healthcare [34]. “The National Health Interview Survey (NHIS) uses a stratified, multistage probability cluster sampling design which provides a nationally representative sample of the U.S. civilian, non-institutionalized population” [34].
“A computer-assisted personal interviewing (CAPI) technology is used to collect information about each household member, and the information collected for a sampled household is reported by a single household respondent. Verification of patients’ reports is conducted through a survey response from their healthcare providers and by contacting the pharmacies where the participants reported filling their prescribed medicines” [34,35]. The panel design of the survey comprises five rounds of interviews covering two full calendar years (Figure 1). Depression was defined as a major depressive episode that affects mood, behavior, and overall health, causing prolonged feelings of sadness, emptiness, or hopelessness and loss of interest in activities that were once enjoyed [35]. Antidepressant monotherapy was defined as patients taking a single antidepressant agent to treat a major depressive disorder. All respondents who were identified as having depression in the 2012–2019 MEPS database, were aged over 19 years, and were taking a single agent of Sertraline, Fluoxetine, or Escitalopram were included in the study. To capture the effects of the medicines on changes in depressive symptoms during the study, only participants who started taking antidepressants at round 2 or round 3 of the panel were included in the study. The “purchrd” variable was used to select participants from various rounds of the panel. The rationale was to compare the baseline depressive symptoms of the participants from the time they started taking the medications with their symptoms after they had been taking them for roughly a year (in round 4). This enabled us to gain insights into the effects of the medicine on the change in depressive symptoms during the study. Patients who purchased medicine before or at the beginning of rounds 1, 4, and 5 of the panel were excluded from the study. Patients who were taking combination therapy were excluded from the study.
Patients who had comorbid conditions were also excluded from the study. Respondents with missing responses on the dependent variable (K6 scores) were excluded from the analysis. ## 2.2. Study Design The MEPS HC medical condition file was used to identify individuals with depression. The MEPS medical condition file contains information on each self-reported medical condition that a respondent experienced during the data collection year. Medical conditions reported by participants were recorded by interviewers and coded to fully specified ICD-10-CM and ICD-9-CM codes. Depression was identified using ICD-9 codes 296 and 311 and ICD-10 code F32 [34]. Patients taking antidepressants were identified using the prescribed medicines file (Figure 2). The most commonly used antidepressants, Fluoxetine, Escitalopram, and Sertraline, were identified using the “rxname” and “rxdrgnam” variables from the prescribed medicines file [34]. The patients’ demographic characteristics were identified from the patient characteristics file. In this study, we included age, race, and gender. ## 2.3. Outcome Measures The effect of the medicines on psychological distress was assessed using the Kessler Index (K6) scores. The Kessler Index (K6) scores measure individuals’ non-specific psychological distress in the past 30 days [28]. The scale consists of six items, each rated on a five-point Likert scale (from “none of the time” to “all of the time”) [28] (Supplementary S1). The longitudinal data files in the MEPS contain K6 scores. These scores are measured in rounds 2 and 4 of a panel and are roughly a year apart [36]. Cut-off points previously reported in the literature were used to stratify K6 scores into no/low psychological distress (0–6), mild–moderate psychological distress (7–12), and severe distress (13–24) [28].
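As a concrete illustration of the scoring rules just described, the K6 severity strata and the round-2-minus-round-4 change classification used in the analysis can be sketched in a few lines of Python; the function names are ours for illustration, not MEPS variable names:

```python
def k6_severity(score: int) -> str:
    """Stratify a K6 score (0-24) using the cut-off points cited above."""
    if not 0 <= score <= 24:
        raise ValueError("K6 scores range from 0 to 24")
    if score <= 6:
        return "no/low psychological distress"
    if score <= 12:
        return "mild-moderate psychological distress"
    return "severe distress"


def k6_change_category(round2_score: int, round4_score: int) -> str:
    """Classify the change in K6 score, computed as round 2 minus round 4.

    A positive difference (1 to 24) means distress decreased, i.e. improved;
    a negative difference (-1 to -24) means it declined; 0 is unchanged.
    """
    diff = round2_score - round4_score
    if diff > 0:
        return "improved"
    if diff < 0:
        return "declined"
    return "unchanged"
```

For example, a participant scoring 15 (severe distress) at round 2 and 4 at round 4 has a difference of +11 and falls into the improved category.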
In this study, regarding changes in the baseline K6 score (that is, the round 2 score minus the round 4 score), a change of 1–24 was classified as improved, a change of 0 was classified as unchanged, and a change ranging from −1 to −24 was classified as declined. ## 2.4. Statistical Analysis Descriptive statistics were used to describe the population according to their socio-demographic characteristics. All statistical tests were considered significant at p ≤ 0.05. The dependent variable, namely the difference in K6 scores, was categorized using 1–24 as “improved”, −1 to −24 as “declined”, and 0 as “unchanged”. A multinomial logistic regression model was built to determine the effect of the independent variables on the above-mentioned dependent variable. Demographic variables such as race, gender, and age were controlled for in the regression analysis. Statistical analysis was conducted using STATA software (version 15.1). ## Demographic Characteristics of Study Population Table 1 shows the demographic characteristics of the study population for each antidepressant. Among the three antidepressants used in the analysis, Sertraline was the most utilized medication among the study population ($n = 251$, $42.61\%$), followed by Fluoxetine ($n = 185$, $31.41\%$). Most of the study population were females ($n = 417$), representing $70.5\%$ of the total study sample. Among the different races, non-Hispanic whites were the highest users ($n = 489$, $83.02\%$) of the three SSRIs, with American Indians being the lowest users ($n = 9$, $1.53\%$). Most of the study population was within the 40–59 age group ($n = 244$, $38.54\%$). Table 2 shows the percentage of patients on Sertraline, Fluoxetine, and Escitalopram who showed improvement, no change, or decline in Kessler 6 scores.
The majority of the patients ($n = 467$, $92.48\%$) were in the improved group, regardless of which of the three medications they were taking. Fluoxetine had the highest improvement rate of $94.27\%$, compared with Sertraline, which had an improvement rate of $91.96\%$, and Escitalopram, which had an improvement rate of $91.13\%$. Table 3 shows the multinomial logistic regression results for changes in the Kessler Index scores among patients taking Sertraline, Fluoxetine, or Escitalopram monotherapy. A total of 84 participants with missing responses on the Kessler Index score were excluded, resulting in 505 participants being included in the regression analysis. Participants in the unchanged K6 category were used as the reference to predict improvement in psychological distress for users of the three SSRIs. Moreover, participants taking Fluoxetine were treated as the reference group among the three medications. Among the various age groups, participants aged between 20 and 39 years were used as the reference group, while non-Hispanic whites were used as the reference for race. In comparison with the participants taking Fluoxetine, the results did not show any statistical difference between participants taking Escitalopram (OR = 0.2823, $95\%$ CI 0.0209–3.812; $p = 0.34$) and those taking Sertraline (OR = 0.45, $95\%$ CI 0.06–3.3249; $p = 0.43$). ## 4. Discussion Patients with a major depressive disorder usually have deteriorating mental health that affects the physical and social aspects of their lives. The primary aim of this study was to assess the effects of Sertraline, Fluoxetine, and Escitalopram on psychological distress using changes in Kessler Index 6 scores among adult outpatients diagnosed with a major depressive disorder. The study sample was characterized by over $70\%$ women, which corresponds with other studies showing that women are more likely than men to experience depression.
Females are also more likely than males to present to a mental health facility or seek medical attention [37,38]. In addition, the increased prevalence of depression correlates with hormonal changes in women, particularly during puberty, before menstruation, following pregnancy, and at perimenopause, suggesting that female hormonal fluctuations may trigger depression [39,40]. The majority of the study population were non-Hispanic whites. Similar racial/ethnic differences in antidepressant use are observed in the treatment of depression [41]. It has also been reported that factors such as racial/ethnic variation in mental health services and availability, treatment acceptability, and educational factors play a role in the prevalence of depression and antidepressant use among races [42]. The 40–59 age group made up the largest share of those taking antidepressant monotherapy, representing over $38\%$ of the study sample. On the contrary, recent studies have shown that young adults aged 18–29 have a higher prevalence of depression than older adults [43,44]. In part, the COVID-19 pandemic has been identified as having played a major role in the increase in the prevalence of depression among young adults [30,31,32]. Young adults have suffered from higher levels of depression and anxiety than older adults throughout the pandemic [45]. According to the Centers for Disease Control and Prevention’s (CDC) Household Pulse Survey, $36\%$ of 18–29-year-olds had symptoms of depression in early May 2021, compared to $22\%$ of those aged 40–49 and $15\%$ of those aged 50–59 [45]. The descriptive statistics showed that $94.27\%$ of the study participants taking Fluoxetine had experienced an improvement in their psychological distress after one year on the medication, followed by Sertraline ($91.96\%$) and Escitalopram ($91.13\%$).
The overall improvement rate of $92.48\%$ among the study sample indicates that selective serotonin reuptake inhibitor medication effectively improves patient-reported outcomes, specifically psychological distress, over one year of taking the medication. In a similar study, the majority of the participants taking either first- or second-generation antidepressant monotherapy remained in the unchanged category after round 4 [36]. The authors explained that the medications might have elicited desirable responses, resulting in patients having controlled depressive symptoms even at the time of the initial measure (round 2 of the panel) of psychological distress [37]. The current study compared the impact of Fluoxetine, Sertraline, and Escitalopram on patient-reported outcomes and psychological distress using changes in the Kessler 6 score. In our comparison with Fluoxetine as a reference drug, there was no statistical difference observed between the effect of Sertraline (OR = 0.45, $95\%$ CI 0.06–3.3249; $p = 0.43$) and Escitalopram (OR = 0.2823, $95\%$ CI 0.0209–3.812; $p = 0.34$) on psychological distress. Currently, there are insufficient data evaluating the effectiveness of these commonly prescribed antidepressants using changes in the Kessler 6 score as a patient-reported outcome. A similar study on changes in the Kessler Index 6 score showed no significant difference between patients using monotherapy and those using add-on/switch therapy [36]. However, comparing our results to a meta-analysis involving 24,595 participants in 111 studies on the efficacy and acceptability of 12 antidepressants, Escitalopram, Sertraline, and Fluoxetine were found to have superior efficacy to the SNRIs in that meta-analysis [46]. With Fluoxetine as a reference compound, both Escitalopram and Sertraline had a significantly higher efficacy rate than Fluoxetine.
However, they concluded that Sertraline may be preferable because of the balance between its efficacy and its tolerability [46,47]. In these studies, the treatment effect was measured using a different instrument, the change in the baseline Montgomery–Åsberg Depression Rating Scale (MADRS) total score. The strength of this study was that a retrospective longitudinal database was used with a nationally representative sample. Due to the structure of the Medical Expenditure Panel Survey (MEPS), we were able to assess the outcome of the medications on psychological distress over time points approximately one year apart (from round 2 to round 4). This gives adequate time to elicit rich data on the long-term effect of the medications on the participants, which is essential for a chronic disease with a high relapse rate, such as depression. However, there were limitations to the study. This study focused on patients with a major depressive disorder without any comorbidities, which limits the generalizability of the results. The study is susceptible to response bias, as the information is self-reported by respondents and cannot therefore always be considered reliable. Moreover, this study could not adjust for the type and severity of depression, illness duration, side effects, and medication adherence, due to the structure of the MEPS. Additionally, this study could not account for the specific dose and titration of the medication, due to the nature of the MEPS, which does not provide dose-related information on the medications. We assumed that patients were prescribed the standard dose of the medications: Escitalopram 10–20 mg once a day [48], Sertraline 150–200 mg daily [49], and Fluoxetine 20–60 mg per day [50].
A future study could focus on examining the real-world impacts of these most widely prescribed antidepressants together with newly approved antidepressants, taking into account medication adherence, the tolerability of the medications, and the type and severity of depression. Given the insufficient evidence on the real-world impacts of selective serotonin reuptake inhibitors among depressed patients, this study adds to the evidence available to inform clinicians on the effect of the long-term use of selective serotonin reuptake inhibitors on patient-reported outcomes among patients with chronic depression. This study can also serve as a guide for researchers in this area, who can focus on the use of second-generation antidepressant monotherapy and dual-therapy antidepressants among patients with severe depression using real-world data. ## 5. Conclusions Based on the descriptive statistics, all three medications effectively reduce psychological distress among adult patients suffering from major depressive disorders without comorbid conditions. Moreover, no significant difference in the improvement rate of psychological distress was observed in our comparison of the three selective serotonin reuptake inhibitors. In addition to taking the effectiveness of the medications into account, it is imperative that clinicians consider patients’ preferences and tolerability toward specific antidepressant medications in their prescribing decisions.
# Efficiency and Safety of CyberKnife Robotic Radiosurgery in the Multimodal Management of Patients with Acromegaly ## Abstract ### Simple Summary Radiosurgery as an adjuvant treatment for acromegaly has shown efficacy in endocrine and tumor biochemical control, with an acceptable safety profile; however, the reported endocrine and tumor control rates and safety profiles are heterogeneous. Therefore, the aim of the study was to evaluate the efficacy and safety of radiosurgery in a well-characterized cohort of acromegalic patients, in addition to analyzing the prognostic factors associated with disease remission. We found a statistically significant reduction in IGF-1, IGF-1 x ULN, and GH concentrations at one year and at the end of follow-up; in addition, it was observed that high basal IGF-1 concentrations were predictors of the absence of biochemical remission. We did not observe cases of optic neuritis associated with radiation toxicity, or stroke. ### Abstract Objective: To analyze, in a cohort of acromegalic patients, the efficacy and safety of radiosurgery (CyberKnife), as well as the prognostic factors associated with disease remission. Material and methods: An observational, retrospective, longitudinal, and analytical study that included acromegalic patients with persistent biochemical activity after initial medical–surgical treatment, who received treatment with CyberKnife radiosurgery. GH and IGF-1 levels at baseline, after one year, and at the end of follow-up were evaluated. Results: 57 patients were included, with a median follow-up of four years (IQR, 2–7.2 years). The biochemical remission rate was $45.6\%$: $33.33\%$ achieved biochemical control, and $12.28\%$ attained biochemical cure at the end of follow-up. A progressive and statistically significant decrease was observed in the comparison of the concentrations of IGF-1, IGF-1 x ULN, and baseline GH at one year and at the end of follow-up.
Both cavernous sinus invasion and elevated baseline IGF-1 x ULN concentrations were associated with an increased risk of biochemical non-remission. Conclusion: Radiosurgery (CyberKnife) is a safe and effective technique in the adjuvant treatment of GH-producing tumors. Elevated levels of IGF-1 x ULN before radiosurgery and invasion of the cavernous sinus by the tumor could be predictors of biochemical non-remission of acromegaly. ## 1. Introduction Pituitary tumors account for $20\%$ of all intracranial tumors [1]. Functioning adenomas account for more than $50\%$ of pituitary tumors [2] and are associated with clinical syndromes with significant morbidity and mortality and with mechanical compression effects on vital structures [3]. Acromegaly is a chronic, deforming disease resulting from an excess production of growth hormone (GH), in most cases caused by a pituitary macroadenoma, which presents clinically with generalized overgrowth of soft tissue and bone. Untreated acromegaly has been associated with a greater number of metabolic, cardiovascular, osteoarticular, pulmonary, and neoplastic comorbidities. Patients with active acromegaly have excess mortality compared to the general population, which is associated with neoplastic and cardiovascular causes, and ultimately a progressive reduction in quality of life [4,5,6]. Due to the heterogeneity and complexity of the clinical presentation, and the low biochemical control/cure rates with the different medical–surgical strategies, most patients benefit from multimodal treatment [7]. The initial treatment of acromegaly is surgical, through a transsphenoidal or transcranial approach, aimed in most cases at tumor demassification.
In the absence of biochemical control after surgery and with a structural tumor remnant, adjuvant medical treatment with first- (Octreotide LAR and Lanreotide autogel) and second-generation (Pasireotide) somatostatin analogues is indicated [8]; the observed biochemical control rates of this treatment vary in the different published studies, ranging from 35–$76\%$ for the first-generation analogues, and from $26.9\%$ to $93.3\%$ for Pasireotide [9]. The biochemical control rates reported with dopaminergic agonists and Pegvisomant are $50\%$ [10] and 58–$97\%$ [9], respectively; however, the use of Pegvisomant implies high costs for health systems, something that could be unsustainable in developing countries. The use of second- and third-line treatments, such as fractionated stereotactic radiotherapy and radiosurgery, has shown efficacy in endocrine and tumor biochemical control, with an acceptable safety profile. However, the rates of endocrine and tumor control and the safety profiles of the different modalities (LINAC, CyberKnife, and GammaKnife) are heterogeneous [11]. Therefore, the aim of the study was to analyze, in a well-characterized cohort of acromegalic patients, the efficacy and safety of radiosurgery, as well as to identify the prognostic factors associated with disease remission. ## 2. Materials and Methods An observational, retrospective, longitudinal, and analytical study was conducted that included acromegaly patients from the clinic of the Hospital de Especialidades, Centro Médico Nacional Siglo XXI, in Mexico City, Mexico, who received medical care during the period between 2010 and 2020. The present study was approved by the local ethics and research committee (Registry identifier: R-2018-3601-149) and was consistent with the ethical guidelines of the 1975 Helsinki Declaration and the Mexican General Health Law on Research for Health Studies.
The acromegaly clinic was established in 2000 and currently has more than 600 patients who receive uniform follow-up, according to established protocols, with neuroendocrinological, neuro-ophthalmological, and neurosurgical care. All patients, except known diabetics, underwent an oral glucose tolerance test, during which both GH and glucose were measured at baseline and at 30, 60, 90, and 120 min after intake of a 75 g glucose load. Additionally, according to our standardized protocol, insulin-like growth factor 1 (IGF-1), as well as morning cortisol, thyroid-stimulating hormone (TSH), free T4, prolactin (PRL), luteinizing hormone (LH), follicle-stimulating hormone (FSH), and testosterone or estradiol were measured in the initial blood sample [12]. Patients received multimodal treatment including transsphenoidal or transcranial surgery; medical treatment with first-generation somatostatin analogues (SSA) (Lanreotide autogel 120 mg by deep subcutaneous application every 28 days and Octreotide LAR 20 mg by intramuscular application every 28 days) and dopaminergic agonists (DA) (Cabergoline 1.5 to 3 mg orally weekly); fractionated stereotactic radiotherapy; and/or radiosurgery. The sample was obtained by non-probabilistic sampling of consecutive cases. The collection of sociodemographic data, medical history, and laboratory and imaging data was carried out through a review of the patients’ electronic records. The inclusion criteria were patients with biochemical activity of acromegaly, of either sex, older than 17 years, who were candidates for radiosurgery treatment according to the guidelines current at the time of patient assessment [13,14]. The selection criteria for referral to radiosurgery were persistent biochemical activity (GH > 1 ng/mL and/or an IGF-1 x ULN > 1.2) after surgical and medical treatment, tumor remnant < 30 mm, and distance from the tumor remnant to the optic chiasm > 3 mm [14,15].
The exclusion criteria were patients without complete data in their electronic files regarding biochemical and imaging outcomes related to the disease and patients who received other types of pituitary/cranial radiation therapy. The primary outcome to be assessed was biochemical remission of acromegaly at the end of follow-up, as a dichotomous nominal variable. Biochemical activity was defined as the presence of GH > 1 ng/mL and/or IGF-1 x ULN > 1.2; biochemical remission after radiosurgery as GH ≤ 1 ng/mL and IGF-1 x ULN ≤ 1.2 without medical treatment; and post-surgery biochemical control as requiring medical treatment with first-generation somatostatin analogues while achieving the control goals. The IGF-1 x Upper Limit Normal (IGF-1 x ULN) value was obtained as the quotient of the IGF-1 obtained from the patient at the time of evaluation and the IGF-1 standardized for age and gender [16]. Tumor volume was calculated using the Di Chiro–Nelson method [17]. The invasion of the cavernous sinus was evaluated according to the Knosp classification. Local tumor control (LC) was defined as the containment and/or non-growth of the tumor remnant. Baseline variables were considered as those measured immediately before radiosurgery. Delay in radiosurgery time was defined as the time between the last surgery and the application of radiosurgery. Delay in diagnosis was defined as the time between the appearance of the first symptoms and the biochemical and imaging diagnosis of acromegaly. Panhypopituitarism was defined as the presence of three or more affected hypothalamic–pituitary axes. Diabetes and prediabetes were defined according to the American Diabetes Association criteria [18]. Hypertension was considered if the systolic blood pressure reading exceeded 140 mmHg or the diastolic was above 90 mmHg. Hypercholesterolemia and hypertriglyceridemia were defined as values exceeding 200 and 150 mg/dL, respectively.
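A minimal Python sketch of the biochemical definitions above may make them easier to apply: the thresholds come from the text, while the function names, and the ellipsoid form we assume for the Di Chiro–Nelson volume (π/6 times the product of the three orthogonal diameters), are our own illustrative choices:

```python
import math


def igf1_x_uln(measured_igf1: float, age_sex_normal_uln: float) -> float:
    """IGF-1 x ULN: the patient's IGF-1 divided by the age- and
    gender-standardized IGF-1 value, as defined in the text."""
    return measured_igf1 / age_sex_normal_uln


def biochemically_active(gh_ng_ml: float, igf1_xuln: float) -> bool:
    """Biochemical activity: GH > 1 ng/mL and/or IGF-1 x ULN > 1.2."""
    return gh_ng_ml > 1.0 or igf1_xuln > 1.2


def in_biochemical_remission(gh_ng_ml: float, igf1_xuln: float,
                             on_medical_treatment: bool) -> bool:
    """Remission after radiosurgery: GH <= 1 ng/mL and IGF-1 x ULN <= 1.2
    without medical treatment."""
    return (not on_medical_treatment
            and gh_ng_ml <= 1.0 and igf1_xuln <= 1.2)


def ellipsoid_tumor_volume(a_mm: float, b_mm: float, c_mm: float) -> float:
    """Ellipsoid approximation of tumor volume from three orthogonal
    diameters (assumed form of the Di Chiro-Nelson method)."""
    return math.pi / 6.0 * a_mm * b_mm * c_mm
```

For instance, a patient with an IGF-1 of 300 ng/mL against an age/gender-standardized value of 250 ng/mL has an IGF-1 x ULN of 1.2 and, with GH ≤ 1 ng/mL and no medical treatment, would meet the remission definition.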
Central hypocortisolism was defined by a cortisol concentration < 5 μg/dL at 7:00 h. Central hypothyroidism was diagnosed when free T4 was below 0.6 ng/dL, along with a low or inappropriately normal TSH. Central hypogonadism was defined by total testosterone < 250 ng/dL or estradiol < 20 pg/mL accompanied by oligo- or amenorrhea, along with low or inappropriately normal serum LH and FSH. ## 2.1. Treatment Parameters Radiosurgery was administered using a CyberKnife M6 platform, with the MultiPlan system used to develop the treatment plans (Accuray Incorporated, Sunnyvale, CA, USA); all treatments were delivered at the Oncology Hospital of the National Medical Center in Mexico City. After a patient was accepted for treatment, the planning MRI and CT were obtained. The median radiation dose was 23.5 Gy (range 22–25 Gy), delivered in a single day or over a maximum of five days, at 5 Gy to 22 Gy per fraction. The individual radiosurgery protocol was decided by the radiation oncologist and neurosurgeon, based on the availability of appointments and the specific circumstances of each patient. For instance, for patients living out of town, the Institution provided accommodation for the duration of their treatments. The different organs at risk were carefully protected, and all plans passed the Normal Tissue Constraints according to the R.D. Timmerman charts. Medical treatments with SSA or DA were suspended at least one month before and during radiotherapy [19,20]. ## 2.2. Hormonal Measurements Assays for the measurement of GH and IGF-1 have changed throughout the follow-up of the cohort. From 2007 to date, hormonal measurements were carried out using the same commercially available immunoassays. GH was measured by means of the Immulite 2-site chemiluminescent assay (DiaSorin–Liaison, Saluggia, Italy), which has a detection limit of 0.009 ng/mL and intra- and interassay coefficients of variation of $2.5\%$ and $5.8\%$, respectively.
The International Reference Preparation (IRP) used in this GH assay was that of the World Health Organization (WHO), second 95/574. IGF-1 was measured by means of a 2-site chemiluminescent assay (DiaSorin–Liaison). The IRP used in these IGF-1 assays was the WHO second 02/254. We established our own age-adjusted normative IGF-1 data, analyzing serum samples from 400 healthy adults with an age range of 18 to 80 years, as previously described [4]. The hormonal assays used in prior years were: before 1999, RIA (LD 0.7) for GH and IRMA for IGF-1; 2000–2007, IMMULITE (LD 0.01) for GH and DIAGNOSTIC SYSTEM LAB for IGF-1. ## 2.3. Statistical Analysis Descriptive and inferential statistics were used for data analysis, taking into account measures of central tendency and dispersion according to the distribution of the variables. The Shapiro–Wilk test was used to determine the normality of the quantitative variables’ distribution. For the comparison of variables in independent groups, frequencies and proportions with Pearson’s chi-squared test or Fisher’s exact test were used, according to the expected values; and for quantitative variables, Student’s t test or the Mann–Whitney U test was used, according to the type of distribution. For the comparison of variables between three dependent groups, Cochran’s Q test was used for qualitative variables, while for the comparison of quantitative variables, Friedman’s test was used. A comparative analysis of the baseline characteristics (before radiosurgery) between the groups of patients with and without biochemical remission (at the last follow-up) was performed.
Subsequently, through crude and adjusted Cox proportional hazards regression analyses, we estimated the magnitude of association of the following variables before radiosurgery: age, gender, IGF-1 x ULN, GH, tumor size, and invasion of the cavernous sinus (grade 1–4 of Knosp’s classification), taking no biochemical remission of the disease at the last follow-up as the outcome. A two-sided p value was used for the between-group difference with respect to the primary outcome, and $p \leq 0.05$ was considered statistically significant. The statistical software used was Stata SE version 16 (StataCorp, College Station, TX, USA). ## 3.1. Baseline Characteristics Of 265 patients with biochemical activity of acromegaly, 57 met the criteria for radiosurgery. Of the 57 patients analyzed, the mean age at diagnosis and at the time of radiosurgery was 47.1 ± 13.4 years and 54.5 ± 12.3 years, respectively, with a female predominance of $56.1\%$; $76.3\%$ of patients had macroadenomas. The median follow-up from radiosurgery was four years (IQR, 2–7.2 years). The delay in diagnosis of acromegaly in the studied cohort was five years (IQR, 4–8 years); all patients underwent transsphenoidal resection for tumor debulking and subsequently received radiosurgery as adjuvant treatment. The median time to radiosurgery was 38 months (IQR, 21–61 months). Before radiosurgery, IGF-1 was 595.3 ± 274.9 ng/mL and median GH was 5.66 ng/mL (IQR, 2.5–18.3). The proportion of patients with tumoral invasion of the cavernous sinus was $78.95\%$. Pituitary hormone deficiencies and other patient characteristics are shown in Table 1.
Of the patients referred for radiosurgery, three ($5.26\%$) were treated with the hypofractionated modality and 54 ($94.74\%$) with a single dose; no statistically significant difference in the biochemical remission rate after radiosurgery was observed when comparing single-dose vs. hypofractionated treatment. Regarding the characteristics of radiosurgery treatment, of the 57 patients included in the study, it was only possible to obtain information on 37; these results are presented in Table S1 of the Supplementary Materials. ## 3.2. Endocrine Outcomes The biochemical remission rate was $45.6\%$ (26 patients), of which $33.33\%$ (19 patients) achieved biochemical control and $12.28\%$ (seven patients) achieved biochemical cure. A progressive and statistically significant decrease was found in the comparison of IGF-1, IGF-1 x ULN, and basal GH concentrations at one year and at the end of follow-up. Likewise, statistically significant increases were observed in the percentage of patients who reached the control goals at the different evaluation times (Table 2). After radiosurgery, a significant reduction was observed in the percentage of patients using pharmacological therapy (Table 2). The tumor local control (LC) rate in the group of patients evaluated was $100\%$ (57 patients). Medical treatment after radiosurgery was characterized by an increase in the number of patients with no treatment at the end of follow-up ($1.75\%$ at baseline vs. $15.79\%$ at the end of follow-up); $56.14\%$ of patients were treated with somatostatin analogues (Octreotide LAR or Lanreotide autogel) before radiosurgery, a proportion that decreased to $49.12\%$ at the end of follow-up. $36.84\%$ of patients received treatment with somatostatin analogues (Octreotide LAR or Lanreotide autogel) plus cabergoline, a proportion that decreased to $28.07\%$ at the end of follow-up.
Finally, $5.26\%$ received treatment with cabergoline alone, a proportion that increased to $7\%$ at the end of follow-up. ## 3.3. Radiosurgery Safety Profile Regarding hypothalamic-pituitary hormonal deficiencies after radiosurgery, the percentages of hypocortisolism, hypothyroidism, and hypogonadism increased significantly throughout follow-up (Table 2). The rate of panhypopituitarism was $1.75\%$ (1 patient) at baseline and $24.56\%$ (14 patients) at the end of follow-up ($p \leq 0.001$). During follow-up after radiosurgery, two patients ($3.5\%$) developed central nervous system tumors of the meningioma type. No cases of optic neuritis associated with radiation toxicity were observed, and there were no cases of stroke. A sub-analysis was carried out including the 37 patients for whom radiosurgery parameters were available; in this analysis, the radiosurgery parameters were contrasted against the presence or absence of new endocrine deficiencies of the hypothalamic-pituitary axis at the end of follow-up, and no statistically significant differences were observed. These results are presented in Table S2 of the Supplementary Materials. ## 3.4. Bivariate Analysis between Active and Biochemical Control Groups In the bivariate analysis between the groups of patients without and with biochemical remission of acromegaly, no significant differences were observed in the baseline characteristics of the patients (Table 3). ## 3.5. Multivariate Analysis In the Cox proportional hazards regression analysis, only a baseline IGF-1 x ULN concentration above range was shown to be a risk factor (HR 1.33; $95\%$ CI 1.01–1.88) for no biochemical remission. Invasion of the cavernous sinus by the growth hormone-producing tumor (grade 1–4 of Knosp’s classification) had an HR of 2.53 ($95\%$ CI 0.92–6.97) for no biochemical remission. The rest of the variables had no statistically significant association with this lack of biochemical control (see Table 4).
## 4. Discussion Currently, the treatment of acromegaly comprises a multimodal approach, which initially includes surgical and medical treatment (with SSA and DA) [13]. However, despite these therapeutic tools, high rates of persistent biochemical activity have been reported, and in such scenarios the use of high-cost drugs, such as pasireotide and pegvisomant, is suggested [21,22]. Unfortunately, in low-income countries these resources are not available in public health institutions, which has led to the use of third-line therapeutic alternatives such as fractionated stereotactic radiotherapy and radiosurgery, whose results have been variable in relation to local tumor control, safety profile, and methodology in the measurement of outcomes [23]. Therefore, the aim of this study was to establish the efficacy and safety of radiosurgery (CyberKnife) as an adjuvant treatment in the multimodal approach to patients diagnosed with acromegaly, as well as to determine which factors were associated with biochemical persistence of the disease. In the cohort evaluated, a biochemical cure rate of $12.2\%$ was observed, $33.3\%$ of patients were in biochemical control with medical treatment, and $54.4\%$ persisted with biochemical activity; these data are similar to those reported by Iwata H. et al., with endocrine remission in $17.3\%$ ($95\%$ CI 7.02–27.58) of their cases according to the Cortina criteria [24]. Similarly, Ehret et al. found a biochemical remission rate of $18\%$, biochemical control with medical treatment in $48\%$, and $34\%$ remaining active [25]. In our study, we obtained percentage reduction rates of both total IGF-1 and IGF-1 x ULN of $70.15\%$ and $65.5\%$, respectively, compared with reductions of total IGF-1 and IGF-1 x ULN of $48.5\%$ and $48.5\%$, respectively, at the end of follow-up in the series of Ehret et al. [25].
Our LC rate was $100\%$ at the last visit, equal to that reported by Ehret et al. [25] and similar to that of Iwata et al., at $82.5\%$ ($95\%$ CI 72.17–$92.83\%$) [26]. Roberts BK et al. analyzed a retrospective series of nine patients in which $44.4\%$ achieved biochemical remission, defined as normalization of IGF-1 concentrations at the end of follow-up, and $44.4\%$ had persistent biochemical activity; the mean follow-up was 17.8 months [27]. Sala E et al. conducted a study of 22 patients in which an IGF-1 normalization rate of $31.5\%$ was reported after CyberKnife radiosurgery at six months of follow-up, while $54.5\%$ of patients remained with active disease; the remission rate at 50 months of follow-up was $50\%$ [28]. The results cited above are consistent with the effectiveness reported in our study, where $33.3\%$ achieved an IGF-1 < 1.2 × ULN and $45.6\%$ achieved endocrine remission at the last visit (GH < 1 ng/mL and IGF-1 < 1.2 × ULN). Singh R et al. performed a meta-analysis involving a total of 1533 patients with acromegaly treated with radiosurgery in its different modalities (LINAC, CyberKnife, and GammaKnife), in which they found endocrine remission and endocrine control rates of $43.2\%$ ($95\%$ CI 31.7–$54.6\%$) and $55\%$ ($95\%$ CI 27.6–$82.4\%$), respectively, at five years of follow-up. The estimated local control rate at 10 years was $92.8\%$ ($95\%$ CI 83–$100\%$) [23]. In relation to the safety of radiosurgery, at the end of follow-up in our study the rate of hypothyroidism had increased by $28\%$ and that of hypocortisolism by $26\%$; panhypopituitarism increased by $22.8\%$. These data differ from those of Ehret et al., who found increases in hormone deficiency rates after radiosurgery at the end of follow-up of $4\%$ for hypogonadism, $4\%$ for hypocortisolism, and $6\%$ for hypothyroidism [25]. Sala et al.
reported an increased rate of hypothalamic-pituitary hormone deficiencies of $22.8\%$ at the end of follow-up (5 years), affecting at least one hormonal axis [28]; this was lower than the rate found in our study. The meta-analysis by Singh R et al., which included different radiosurgery modalities for the treatment of acromegaly, estimated an overall rate of new hypothalamic–pituitary deficiencies of $26.8\%$ ($95\%$ CI 16.8–$36.7\%$) [23]. In relation to hypocortisolism, in our clinical practice we used a cut-off point of <5 μg/dL accompanied by symptoms and signs of adrenal insufficiency. Patients in whom there was doubt about the diagnosis were referred for induced hypoglycemia testing. However, most patients had cortisol levels between 3 and 15 μg/dL, in which range the diagnosis can only be made with stimulation tests such as the Synacthen test, which was not available in our center. Therefore, the prevalence of hypocortisolism in our population has likely been underestimated. In relation to optic radiation toxicity and cerebral vascular events, our results agree with studies such as those of Roberts B et al. [27] and Iwata H et al. [26], who reported the absence of these complications in their cohorts. However, the rates of radiosurgery-associated visual toxicity reported in its different modalities (LINAC, Gamma, CyberKnife) are variable, ranging from $0\%$ to $5\%$, with a pooled rate of $2.7\%$ ($95\%$ CI 1.3–$4.2\%$) [23]. Regarding potential predictors of the absence of biochemical remission following radiosurgery, we found a higher probability of no biochemical remission when the baseline IGF-1 x ULN value was elevated, with an adjusted HR of 1.33 ($95\%$ CI 1.01–1.88). Similarly, Ehret F et al. found that elevated pretreatment IGF-1 x ULN values were associated with a lower likelihood of biochemical remission of acromegaly [25].
These data are also congruent with those obtained in several studies that evaluated the association between elevated IGF-1 x ULN and total IGF-1 levels before surgical treatment and/or fractionated stereotactic radiotherapy, finding that elevated baseline concentrations were predictive of a lack of biochemical remission [25,29,30,31]. On the other hand, the presence of tumoral invasion of the cavernous sinus (grade 1–4 of Knosp’s classification) showed a trend toward statistical significance, with an HR of 2.53 ($95\%$ CI 0.92–6.97) as a risk factor for no biochemical remission, a finding previously reported in invasive adenomas, which have a low probability of cure and/or remission [30]. All of the above is a consequence of the difficulty of surgically dissecting somatotroph cells in the cavernous sinus. CyberKnife is a relatively new technology for frameless stereotactic radiosurgery, in which a mobile linear accelerator mounted on a robotic arm is driven by an image-guided robotic system. Patients are immobilized in a thermoplastic mask, and radiation doses can be delivered in single or multiple fractions with a targeting accuracy of 0.5 to 1 mm, similar to that achieved with frame-based stereotactic radiosurgery [32]. Radiosurgery in its CyberKnife modality is an adjuvant strategy to surgery and medical treatment in acromegaly with an acceptable effectiveness profile: according to the series studied, biochemical control ranges from 17–$65.4\%$, with optimal tumor control rates in the range of 96–$100\%$ [32]. Radiosurgery shows a variable safety profile, with hypothalamic-pituitary deficiencies ranging from 7.8–$57\%$ for hypopituitarism and visual deficit rates from 0–$11.1\%$ [32]; this variability in safety makes it necessary to carry out studies to establish radiosurgery as a treatment tool that can be widely used in the management of the disease.
The weaknesses of our study are, first, the sample size, although the study population is representative of the patients under follow-up in our clinic; second, the lack of information regarding radiosurgery treatment parameters, which would be important for the evaluation of new hormonal deficits, especially the dose administered to the pituitary stalk; and third, the short median follow-up after radiotherapy, so that efficacy and side effects could be underestimated, indicating that long-term studies are required to evaluate the efficacy and safety outcomes of CyberKnife radiosurgery. Its strengths are that the data come from a cohort with a well-characterized rare disease, with a pre-established diagnostic and therapeutic protocol from the beginning of the acromegaly clinic, reducing the probability of some biases. ## 5. Conclusions The results of our study suggest that radiosurgery (CyberKnife modality) is a safe and effective technique in the adjuvant treatment of GH-producing tumors. Additionally, elevated pre-radiosurgery IGF-1 x ULN levels could be a predictor of a lack of biochemical remission of acromegaly.
# Genetic and Probiotic Characteristics of Urolithin A Producing Enterococcus faecium FUA027 ## Abstract Enterococcus faecium FUA027 transforms ellagic acid (EA) to urolithin A (UA), which makes it a potential candidate for the preparation of UA by industrial fermentation. Here, the genetic and probiotic characteristics of E. faecium FUA027 were evaluated through whole-genome sequence analysis and phenotypic assays. The chromosome size of this strain was 2,718,096 bp, with a GC content of $38.27\%$. The whole-genome analysis revealed that the genome contained 18 antibiotic resistance genes and seven putative virulence factor genes. E. faecium FUA027 contains neither plasmids nor mobile genetic elements (MGEs), so transmission of its antibiotic resistance genes or putative virulence factor genes should not occur. Phenotypic testing further indicated that E. faecium FUA027 is sensitive to clinically relevant antibiotics. In addition, this bacterium exhibited no hemolytic activity and no biogenic amine production, and it could significantly inhibit the growth of the quality control strain. In vitro viability was >$60\%$ in all simulated gastrointestinal environments, with good antioxidant activity. These results suggest that E. faecium FUA027 has the potential to be used in industrial fermentation for the production of urolithin A. ## 1. Introduction Ellagitannins (ETs), the metabolic precursors of urolithins, can be hydrolyzed to ellagic acid (EA), which is subsequently metabolized by gut microorganisms to urolithins [1]. Among the urolithins, urolithin A (UA) exhibits several potentially beneficial bioactivities, such as restoring muscle function [2] and antiobesity [3], antioxidant [4], anti-inflammatory, and anticancer activities [5]. A growing body of literature has recently focused on the impact of the natural compound UA on health, disease, and aging [6].
Numerous studies have shown that different urolithin metabotypes (UMs) produce significantly different amounts and types of urolithins [7]. The gut microflora of more than $40\%$ of middle-aged and elderly people cannot metabolize EA to UA [8]. Cortés et al. found that the percentage of the UM-A population declines when the intestinal flora changes with age [9]. Given the influence of the intestinal flora on UA formation [10], screening strains capable of metabolizing EA to produce UA is of interest. Currently, little is known about the species of gut bacteria involved in the conversion of EA to UA. Strains found to metabolize EA to UA include *Bifidobacterium pseudocatenulatum* INIA P815 [11], *Streptococcus thermophilus* FUA329 [12], *Lactococcus garvieae* FUA009 [13], and *Enterococcus faecium* FUA027 [14]. S. thermophilus FUA329 was isolated from human milk; L. garvieae FUA009 and E. faecium FUA027 were screened from fecal samples. These bacteria have the potential to be developed as probiotics for the in vitro biotransformation of EA to UA, or for industrial fermentation to produce UA [15]. Our previous studies demonstrated that E. faecium FUA027, which was isolated from human fecal samples, metabolizes EA to UA, as shown by the detection of UA in the strain’s fermentation broth by high-performance liquid chromatography (HPLC) and liquid chromatography tandem mass spectrometry (LC-MS/MS). The highest yield of UA produced by E. faecium FUA027 was 10.80 μM, making this strain a promising candidate for development as a probiotic [14]. The safety and probiotic properties of a strain to be used as a probiotic must be evaluated [16]. In this study, whole-genome sequence analysis and phenotypic assays were used in combination to assess antibiotic resistance, metabolite toxicity, and survival under simulated gastrointestinal conditions. The safety of E.
faecium FUA027 and its potential for use in the preparation of UA by industrial fermentation were confirmed. ## 2.1. Bacterial Strain and Growth Conditions E. faecium FUA027 was deposited in the China General Microbiological Culture Collection Center (CGMCC) under the accession number CGMCC No. 24964. Unless otherwise noted, FUA027 was cultivated in Anaerobe Basal Broth (ABB) medium and incubated under anaerobic conditions consisting of N2/H2/CO2 (80:10:10, v:v:v) at 37 °C for 24 h. Staphylococcus aureus ATCC 12600, *Escherichia coli* ATCC 25922, Yeast ATCC 24060, Aspergillus niger ATCC 6273, and *Lactobacillus plantarum* ATCC 4008 were used partly for inhibition experiments and partly as control strains. S. aureus and E. coli were cultured at 37 °C in Luria–Bertani broth for 24 h. Yeast and A. niger were cultured on potato dextrose agar medium at 37 °C for 48 h. L. plantarum and S. thermophilus were cultivated in Man Rogosa Sharpe broth at 37 °C for 48 h. ## 2.2. Whole-Genome Sequencing Genomic DNA was extracted from an E. faecium FUA027 culture grown in ABB by using a bacterial DNA extraction kit from Sangon Co., Ltd. (Shanghai, China). For the DNA sample preparations, 1 µg of DNA per sample was used as the input material. Sequencing libraries were created using the NEBNext® Ultra™ DNA Library Prep Kit for Illumina (New England Biolabs, Ipswich, MA, USA) according to the manufacturer’s instructions. In brief, the DNA sample was sonicated to obtain 350-bp fragments. The DNA fragments were end-polished, A-tailed, and ligated with the full-length adaptor for Illumina sequencing, with further PCR amplification. Finally, the PCR products were purified with the AMPure XP system, and the size distribution of the libraries was analyzed using an Agilent 2100 Bioanalyzer and quantified using real-time PCR.
The whole genome of FUA027 was sequenced using the Nanopore PromethION platform and Illumina NovaSeq PE150 at Beijing Novogene Bioinformatics Technology Co., Ltd. (Beijing, China). ## 2.3. Genome Assembly and Annotation The trimmed PE150 and Nanopore data for the FUA027 genome were combined and assembled using SMRT Link v5.0.1 software (https://www.pacb.com/support/software-downloads/, accessed on 15 October 2022). The quality of the genome assembly was validated using QUAST ver. 5.0.2. The final assembly was annotated using the NCBI Prokaryotic Genome Annotation Pipeline (http://www.ncbi.nlm.nih.gov/genome/annotation_prok/, accessed on 15 October 2022) [17]. We used Gene Ontology (GO), the Kyoto Encyclopedia of Genes and Genomes (KEGG), Clusters of Orthologous Groups (COG), the Non-Redundant Protein Database, the Transporter Classification Database, and Swiss-Prot to predict gene function. ## 2.4.1. Identifying Safety-Related Genes from the FUA027 Genome Bacterial virulence factors were identified by reference to the virulence factor database updated in 2019 (VFDB, http://www.mgc.ac.cn/VFs/, accessed on 11 October 2022) [18]. Protein sequences with >$50\%$ similarity in the comparison results were identified as putative virulence genes. Antimicrobial resistance determinants were identified using the ABRicate program (https://github.com/tseemann/abricate, accessed on 11 October 2022) based on the ResFinder database (http://genomicepidemiology.org/, accessed on 11 October 2022) [19]. Antibiotic resistance genes of E. faecium FUA027 were also identified using the Comprehensive Antibiotic Resistance Database (CARD, https://card.mcmaster.ca, accessed on 11 October 2022) [20]. ## 2.4.2. Antibiotic Susceptibility Testing Susceptibility testing was performed through disk diffusion according to EUCAST recommendations [21]. The strain FUA027 was purified, inoculated into 20 mL of ABB liquid medium, and incubated anaerobically at 37 °C for 24 h.
Bacterial colonies were counted, and the concentration of the bacterial suspension was adjusted to 1.0 × 108 CFU/mL. The FUA027 bacterial suspension was then added dropwise and evenly spread onto the agar plate. Under aseptic conditions, antibiotic susceptibility disks were gently pressed onto the agar plates using forceps; the spacing between disks was kept no less than 20 mm, and the distance from the edge of the plate no less than 17 mm. The plates were sealed and incubated at 37 °C for 14 h. The diameter of the inhibition zone was recorded to determine antibiotic sensitivity. ## 2.4.3. Hemolytic Activity Evaluation Hemolytic activity was studied using the method described by Buxton: in short, E. faecium FUA027 was inoculated onto Columbia Blood Agar and incubated at 37 °C for 24 h [22]. S. aureus ATCC 12600 was used as a control strain. ## 2.4.4. Nitrate Reductase and Amino Acid Decarboxylase Activity The nitrate broth assay kit and amino acid decarboxylase assay kit obtained from Beijing Land Bridge Technology Co., Ltd. (Beijing, China) were used in the metabolic toxicity tests, following the manufacturer’s instructions. Detection of nitrate reductase activity: under aseptic conditions, single colonies of the test strain and the quality control strain E. coli ATCC 25922 were picked from the plate and inoculated into nitrate broth assay ampoules by using an inoculating needle. The ampoules were incubated at 37 °C for 24 h. After incubation, nitrate reduction reagents A and B were added dropwise at 5:2 (v:v), and the results were observed immediately. Three parallel experiments were conducted for each sample [23].
Detection of amino acid decarboxylase activity: under aseptic conditions, a single colony of the test strain was picked from the plate by using an inoculating needle and inoculated into the amino acid decarboxylase series ampoules as well as the amino acid decarboxylase control tube. Sterile liquid paraffin was added to cover the surface of the medium, and the lysine, ornithine, and arginine ampoules were incubated at 37 °C for 24 h. After the phenylalanine ampoule was incubated for 24 h, 4–5 drops of $10\%$ FeCl3 aqueous solution were added, and the results were observed within 2 min. Following incubation of the tryptophan ampoules for 24 h, 2–3 drops of the Kovacs reagent were added, and the results were observed immediately. Three parallel experiments were conducted for each sample. ## 2.5.1. Probiotic-Associated Genes in the E. faecium FUA027 Genome Hidden Markov models (HMMs) were used to find probiotic-associated genes as well as environmental tolerance-related genes in the genome [24]. Additionally, we searched the annotation results for genes related to adhesion factors. Putative genes involved in antimicrobial compound synthesis and secondary metabolism gene clusters in the E. faecium FUA027 genome were identified using antiSMASH 6.0 (https://antismash.secondarymetabolites.org, accessed on 11 December 2022) [25] and BAGEL 4.0 (http://bagel4.molgenrug.nl/index.php, accessed on 11 December 2022) [26]. ## 2.5.2. Evaluation of Acid and Bile Salt Tolerance In Vitro Following the method of Pieniz et al., the survival of the strain in a simulated gastrointestinal environment was measured using the viable plate count method [27]. The strain FUA027 was grown in ABB liquid medium at 37 °C for 24 h. Then, the culture was adjusted to an optical density (OD600) of 1.0 ± 0.05.
ABB liquid medium at different pH values and with different bile salt concentrations was prepared separately: test tubes containing 9 mL of ABB liquid medium were adjusted with HCl to different pH values (2.0, 2.5, 3.0, 3.5, and 4.0), and the ABB liquid medium was supplemented with bovine bile salt to final concentrations of $0.1\%$, $0.2\%$, $0.3\%$, $0.4\%$, and $0.5\%$ (w/v), respectively. Then, 1 mL of inoculum was added to each tube, with normal ABB liquid medium used as a control. Sampling was performed at 0, 1, 2, and 3 h. The samples were diluted with ABB medium, plated, and incubated for 24 h, after which viable colonies were counted. The survival rate was calculated using the following formula: Survival rate (%) = (Nt/N0) × $100\%$, where Nt (log CFU/mL) represents the number of viable bacteria after t hours of treatment, and N0 (log CFU/mL) refers to the number of viable bacteria of E. faecium FUA027 before treatment. ## 2.5.3. Evaluation of the Antioxidant Activity In Vitro The FUA027 strain was cultured in ABB liquid medium at 37 °C for 18 h. The E. faecium FUA027 bacterial liquid was centrifuged (20 °C, 3000 rpm, 10 min); the supernatant was discarded, and the intact cells of the strain were harvested. The cell pellet was washed twice with sterile distilled water and suspended in 1 mL of it [28]. The concentration of this suspension was adjusted to approximately 1.0 × 108 CFU/mL, and it served as the sample in the antioxidant tests. Using antioxidant kits from the Jiancheng Bioengineering Institute (Nanjing, China), in vitro antioxidant activities were measured, including 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical, hydroxyl radical, and superoxide anion scavenging activities [29]. ## 2.5.4. Hydrophobicity and Auto-Aggregation Tests Hydrophobicity: the E. faecium FUA027 bacterial liquid was centrifuged (20 °C, 3000 rpm, 10 min), and the pellet was washed with and suspended in distilled water.
The culture suspension was adjusted to an OD600 value of 0.5 ± 0.02 (A0). Then, an equal volume of xylene was added to the bacterial suspension, vortexed for 20 s, and the mixture was kept at 37 °C for 1 h. The absorbance of the aqueous phase at 600 nm (A2) was then determined. Three parallel tests were conducted [30]. The hydrophobicity was calculated using the following formula: Hydrophobicity (%) = ((A0 − A2)/A0) × $100\%$. Auto-aggregation: the FUA027 bacterial liquid was centrifuged (20 °C, 3000 rpm, 10 min) and washed with distilled water, and its OD600 value was adjusted to 0.5 ± 0.02 (A0). The bacterial suspension was allowed to stand at 37 °C for 4 h, and the absorbance of the supernatant at 600 nm (A2) was determined. Three parallel tests were conducted. The auto-aggregation rate was calculated using the following formula: Auto-aggregation rate (%) = ((A0 − A2)/A0) × $100\%$. ## 2.5.5. Evaluation of Antibacterial Activity A single colony of E. faecium FUA027 was picked, inoculated into ABB liquid medium, and cultured anaerobically at 37 °C for 24 h. Then, 10 mL of the bacterial culture was mixed thoroughly with an equal volume of ethyl acetate, vortexed for 30 s, and transferred to a separatory funnel. The mixture was allowed to stand at room temperature for 5 min. After the solution stratified, the upper organic phase was collected and rotary evaporated at 60 °C for 10–15 min, until no smell of ethyl acetate remained. Then, 2 mL of ethyl acetate was added to dissolve the residue in the rotary evaporation flask, fully mixed, and filtered through a nylon syringe filter (pore size: 20 μm). The filtrate was collected as the antibacterial solution [31]. The experimental group comprised the upper organic phase of the E. faecium FUA027 culture after extraction with ethyl acetate (concentrated five times) and the lower aqueous phase after extraction with ethyl acetate.
ABB medium extracts and ethyl acetate were used as blank controls. The Kirby–Bauer test for antibacterial effects: 100 μL of bacterial suspension of *Staphylococcus aureus* ATCC 12600, *Escherichia coli* ATCC 25922, Yeast ATCC 24060, or Aspergillus niger ATCC 6273 was evenly spread on separate plates. Then, four sterile filter paper disks of diameter 5 ± 0.5 mm were placed on each plate. A total of 10 μL of sample was added dropwise to each filter paper disk, and the plates were incubated for 12 h at 37 °C. A vernier caliper was then used to measure and record the diameter of the inhibition zone, and the inhibitory effect was evaluated on the basis of the inhibition zone diameter. Three independent tests were repeated [32]. ## 3.1. Genome Properties The whole-genome sequence of E. faecium FUA027 contained a single, circular 2,718,096-bp-long chromosome with an average GC content of $38.27\%$ (Figure 1). The Glimmer program identified 2700 genes with an estimated coding ratio of $87.1\%$. Of them, 2617 were protein-coding genes and 83 were RNA genes. Among the 83 RNA genes, 17 coded for 5S, 16S, and 23S rRNAs; two coded for sRNAs; and 64 coded for tRNAs (Table 1). The PlasmidFinder 2.0 tool did not find any plasmid sequences. The FUA027 genome sequence was submitted to NCBI under the accession number OM670243. ## 3.2.1. Identification of Antibiotic Resistance Genes In the clinical setting, strains resistant to a particular antibiotic are typically associated with infection. Antibiotic resistance genes in a probiotic genome are not in themselves a safety issue if the genes are unlikely to be transferred to other strains. However, probiotics containing these genes could theoretically act as a source of antibiotic resistance genes for potentially pathogenic bacteria. Probiotics must therefore also be tested for the presence of antibiotic resistance genes, because studies have confirmed that these genes may be transferred in food and in the intestinal environment.
Enterococcus exhibits stronger natural resistance than other Gram-positive bacteria and acquires resistance genes through various mechanisms, producing multiple high-level drug-resistant strains [33]. Amino acid sequences of E. faecium FUA027 were compared with the drug resistance gene database CARD (https://card.mcmaster.ca/, accessed on 11 December 2022), and protein sequences with >$50\%$ similarity in the comparison results were extracted as antibiotic resistance genes. Eighteen antibiotic resistance genes were identified, covering 10 antibiotic classes, including aminoglycosides, fluoroquinolones, lincosamides, and vancomycin. Probiotic E. faecium strain T-110 and the non-pathogenic strain E. faecium NRRL B-2354 both contain a plasmid, according to Natarajan et al. [34]. Importantly, we used the MobileElementFinder tool to search for MGEs, and their absence was confirmed. Consequently, because E. faecium FUA027 has no plasmid and none of its antibiotic resistance genes are located on MGEs, these drug resistance properties cannot be transferred to other, potentially pathogenic bacteria through mobile elements. Thus, at the genetic level, this study indicates that E. faecium FUA027 carries a low risk of horizontal transfer of drug resistance. To corroborate the results of the antibiotic resistance gene analyses, an antibiotic sensitivity test was conducted; nevertheless, the presence of resistance genes did not exactly match the experimental results. In total, 27 antibiotics were tested, and E. faecium FUA027 was resistant to nine types of antibiotics (Table 2 and Table A1). The antibiotic resistance genes in the genome were analyzed in combination with the results of the in vitro antibiotic susceptibility test. E.
faecium FUA027 was safe in terms of antibiotic resistance. ## 3.2.2. Evaluation of Virulence Factor Genes and Toxin-Encoding Genes According to gene function classification, virulence genes carried by enterococci mainly encode proteins related to adherence, exotoxins, exoenzymes, immunomodulation, and biofilm formation [35]. The VFDB was used to identify virulence factor genes in E. faecium FUA027; however, most putative virulence factor genes had <$60\%$ similarity with VFDB entries [36]. In total, seven potential virulence factor genes were identified (Table 3). These genes may encode proteins involved in adhesion, immunomodulation, exoenzyme activity, and biofilm formation. Genes encoding enterococcal hemolysin A (hlyA), cytolysin (cyl), aggregation substance (as), enterococcal surface protein, sex pheromones (cob and ccf), and the serum resistance-associated gene (sra), which are well-known potential virulence factors, were missing in E. faecium FUA027. According to Deng’s study, among 110 probiotic Enterococcus spp., 35 ($31.8\%$) enterococcal strains exhibited β-hemolytic characteristics. However, in our study, FUA027 exhibited γ-hemolysis on blood plates, and no genes encoding Hbl, Nhe, or cytotoxin K, which are associated with hemolysis and toxin production, were found in the genome (Figure 2). These results thus confirmed that E. faecium FUA027 could be used in the preparation of UA by industrial fermentation. ## 3.2.3. Biogenic Amine Production The nitrate reductase activity assay revealed that E. faecium FUA027 does not possess nitrate reductase. No color change was observed in the tubes containing the test strain, and the color turned red only after the addition of trace zinc powder, indicating that the test group was negative. The tube containing the quality control strain E. coli ATCC 25922 was red, i.e., positive. The amino acid decarboxylase activity of E. faecium FUA027 was preliminarily assessed on the basis of the color change in the amino acid decarboxylase medium. With E.
faecium FUA027, the color of the amino acid decarboxylase medium remained unchanged (yellow), indicating that no biogenic amines (BA) were produced in the medium by the strain. The experimental results revealed that FUA027 does not possess lysine, ornithine, arginine, tryptophan, or phenylalanine decarboxylase activities. The main source of BA in food is the microbial decarboxylation of amino acids; for example, the decarboxylation of tyrosine, ornithine, and lysine produces tyramine, putrescine, and cadaverine, respectively. BA accumulation in food has serious implications for food safety and human health [37]. Of the 129 enterococci strains of three different origins (food, veterinary, and human) screened by Sarantinopoulos et al., none produced histamine, cadaverine, or putrescine [38]. However, >$90\%$ of E. faecium strains isolated from cheese have been identified as tyramine producers, and some E. faecium strains from humans also produce putrescine [39]. E. faecium FUA027 was found not to produce BA, and thus we believe that this strain may be used safely in industrial fermentation. ## 3.3.1. Acid and Bile Salt Tolerance In Vitro Normal human gastric juice pH is approximately 1–3, and normal human intestinal pH is approximately 6.8–7.0; the pH in the stomach can rise to 4–5 after food is consumed. Probiotics can only exert their beneficial role in the intestine if they resist the inhibitory effects of gastric acid and pepsin [40]. A gene encoding conjugated bile acid hydrolase (cbh) and three genes encoding bile acid sodium symporter family proteins were discovered in E. faecium FUA027; these genes may contribute to bile salt resistance. F0F1-ATPase is considered the main pH regulator inside cells. Eight genes coding for F0F1-ATP synthase subunits were identified in the FUA027 genome.
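The acid and bile salt tolerance examined in this section is quantified as a survival rate, i.e., the ratio of viable counts after the challenge to the counts before it. A minimal sketch of that arithmetic; the CFU values are hypothetical, not the study's measurements:

```python
def survival_rate(cfu_after, cfu_before):
    """Survival rate (%) = viable count after challenge / count before challenge x 100."""
    return 100.0 * cfu_after / cfu_before

# Hypothetical counts: 6.7e8 CFU/mL surviving from an inoculum of 1.0e9 CFU/mL
rate = survival_rate(6.7e8, 1.0e9)
print(f"{rate:.1f}%")  # 67.0%
```

Some protocols compute the ratio on log-transformed counts instead; the plain CFU ratio shown here matches the percentage style used in the text.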
Furthermore, a cation transporter gene, two Na+/H+ antiporter genes, and a sodium ion transporter gene linked to pH regulation and ion homeostasis were discovered (Table A2). The survival rates of E. faecium FUA027 in the in vitro acid tolerance test at different pH values are shown in Figure 3A. The survival rate declined steadily as the pH value decreased. Studies have shown that strains with a survival rate of >$60\%$ can be considered acid-resistant. The survival rate of E. faecium FUA027 at pH 3.0 was >$60\%$, and that at pH 2.0 was >$50\%$; compared to highly acid-tolerant strains, E. faecium FUA027 was thus somewhat less acid-tolerant. Another crucial indicator for assessing the qualities of potential probiotics is the tolerance of strains to high bile salt concentrations in the human gastrointestinal tract. Studies have shown that the small intestine contains approximately $0.3\%$ bile salts. In our study, the survival rate of the strain was higher than $67\%$ at bile salt concentrations of $0.1\%$–$0.3\%$, and still >$60.00\%$ at bile salt concentrations of $0.4\%$ and $0.5\%$ (Figure 3B), indicating that the strain has excellent bile salt resistance. We identified a gene coding for conjugated bile acid hydrolase (cbh), two Na+/H+ antiporter genes (namely nhaC and napA), and ABC transporter genes potentially contributing to bile salt resistance in E. faecium FUA027. Eight genes coding for F0F1-ATP synthase subunits (namely atpB, atpE, atpF, atpH, atpA, atpG, atpD, and atpC) were identified in the FUA027 genome. Therefore, we suggest that the in vitro acid and bile salt tolerance of E. faecium FUA027 is explained by these related genes in its genome. ## 3.3.2. Antioxidant Ability In Vitro Some probiotic metabolites can lessen the oxidative damage that causes aging and chronic diseases [41]. The results of the in vitro antioxidant ability of E. faecium FUA027 are presented in Table 4.
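The antioxidant measurements summarized in Table 4 are typically expressed as radical scavenging percentages derived from absorbance readings, using the common formula (1 − A_sample / A_control) × 100. A sketch of that calculation, with hypothetical absorbance values rather than the study's raw data:

```python
def scavenging_rate(a_sample, a_control):
    """Radical scavenging activity (%) from absorbance readings:
    (1 - A_sample / A_control) x 100."""
    return 100.0 * (1.0 - a_sample / a_control)

# Hypothetical absorbances for a DPPH-style assay
print(round(scavenging_rate(0.42, 0.99), 2))
```

Variants of the formula subtract a sample blank from A_sample first; the structure of the calculation is the same.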
The DPPH scavenging activity of the fermentation supernatant was as high as $57.62\%$, the superoxide anion scavenging capacity was $36.23\%$, and the hydroxyl radical clearance rate was $30.12\%$. Cell wall building blocks such as polysaccharides, teichoic acids, and peptidoglycan are crucial for antioxidation, and the structure of extracellular metabolites is closely related to the antioxidant activity of the fermentation supernatant. In addition, the antioxidant activities of L. plantarum and E. faecalis were studied. The DPPH scavenging activity of L. plantarum was $62.78\%$, close to that of E. faecium FUA027, whereas the activity of E. faecalis was lower than that of E. faecium FUA027. Ten genes associated with the oxidative stress response were found in the FUA027 genome; these genes could help the strain avoid damage by O2− and H2O2. They include the peroxide-responsive repressor (perR), NADH peroxidase (npr), alkyl hydroperoxide reductase (ahpC/F), glutathione peroxidase (gpx), superoxide dismutase (sodA), thioredoxin reductase (trxB), and glutathione reductase (gor). Among them, perR regulates H2O2-induced oxidative stress: in the presence of H2O2, or under iron and manganese ion deficiency, perR upregulates antioxidant enzymes such as catA and ahpC/F to scavenge H2O2 and alkyl hydroperoxides (Table A2). The presence of these antioxidant genes indicates that E. faecium FUA027 has high antioxidant activity. Based on the combined genomic and phenotypic analyses, we speculate that the good antioxidant capacity of FUA027 is due to the expression of antioxidant genes in its genome, such as those encoding catalase, glutathione peroxidase, and superoxide dismutase. ## 3.3.3. Evaluation of Adhesion-Related Genes Probiotics play a beneficial role by adhering to the intestinal mucosa and epithelial cells.
We searched the gene annotation data for genes related to adhesion, colonization, mucin binding, flagellar hooks, and fibrinogen/fibronectin binding. Genes encoding an adhesion lipoprotein, S-ribosylhomocysteine lyase (luxS), and segregation and condensation protein B (scpB) were found in the E. faecium FUA027 genome (Table 5) [42]. Biofilms of lactic acid bacteria can colonize the intestine, thereby protecting strains during gastrointestinal transit, producing certain antimicrobial compounds, and stimulating the immune response. Auto-aggregation is a crucial property for biofilm formation, and hydrophobicity may assist in adhesion; both are vital indicators of the ability of microbes to colonize the gut. FUA027 exhibited higher hydrophobicity and auto-aggregation than the commercial probiotic strain *Bifidobacterium longum* BB536. This suggests that E. faecium FUA027 can colonize the intestinal tract well and thus exert its probiotic properties. ## 3.3.4. Antibacterial Test of E. faecium FUA027 against Quality Control Strains In the in vitro experiment, the inhibitory ability of E. faecium FUA027 against four test strains was investigated. As shown in Figure 4, FUA027 exhibited significant inhibitory effects on E. coli ATCC 25922 and S. aureus ATCC 12600, with inhibition zone sizes of 26.24 ± 0.34 mm and 22.12 ± 0.26 mm, respectively. The inhibition zone sizes were 9.2 ± 0.52 mm and 8.74 ± 0.38 mm for yeast ATCC 24060 and A. niger ATCC 6273, respectively. E. faecium FUA027 thus had a significantly better inhibitory effect on bacteria than on fungi. Antimicrobial activity is a crucial property of probiotics against gastrointestinal infections. E. faecium mainly exerts its bacteriostatic effect by secreting organic acids; furthermore, bacteriocins, bacteriocin-like substances, and hydrogen peroxide secreted by E. faecium can inhibit intestinal pathogenic microorganisms to some extent. Many bacteriocin-producing E. faecalis strains have been reported.
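The inhibition-zone values reported above (mean ± SD over three independent replicates) reduce to simple summary statistics. A minimal sketch with hypothetical triplicate readings, not the study's raw measurements:

```python
import statistics

def zone_summary(diameters_mm):
    """Mean and sample standard deviation (n-1 denominator) of
    inhibition-zone diameters in mm, rounded to 2 decimals."""
    return (round(statistics.mean(diameters_mm), 2),
            round(statistics.stdev(diameters_mm), 2))

# Hypothetical triplicate readings for one indicator strain
print(zone_summary([26.0, 26.2, 26.5]))  # (26.23, 0.25)
```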
Rahmeh et al. explored how E. faecium S6 exerts its antimicrobial effect by producing enterocins and organic acids [43]. Valenzuela et al. isolated an E. faecium PE 2-2 strain from seafood that inhibited S. aureus and demonstrated that this strain carried the enterocin A structural gene [44]. Basanta et al. reported that E. faecium L50, isolated from a Spanish dry fermented sausage, produces enterocin L50 (EntL50A and EntL50B), enterocin P, and enterocin Q, and exhibits a broad antimicrobial spectrum [45]. Enterocins are often used as preservatives for meat and dairy products; the most widely used are enterocin A and enterocin B, belonging to class II bacteriocins. In our study, four biosynthetic gene clusters associated with T3PKS and a cyclic lactone autoinducer were identified using antiSMASH 5.0, and BAGEL4 predicted a bacteriocin of the sactipeptide class in the E. faecium FUA027 genome. Sactipeptides (sulfur-to-alpha-carbon thioether cross-linked peptides) are ribosomally synthesized and post-translationally modified peptides that exhibit antibacterial activity [46]. In conclusion, the in vitro experiments supported the presence and activity of extracellularly secreted bacteriocins, as they significantly inhibited the growth of E. coli ATCC 25922 and S. aureus ATCC 12600. ## 4. Conclusions In summary, we have described the whole-genome sequence of E. faecium FUA027. FUA027 has a 2,718,096-bp-long chromosome with an average GC content of $38.27\%$. Genomic screening revealed that FUA027 lacks key virulence factor genes and toxin-encoding genes. Although 18 antibiotic resistance genes were identified in the strain, it carries no plasmids or mobile elements and is therefore unlikely to acquire or transfer resistance genes. The safety of this strain was further confirmed through hemolysis tests, metabolic toxicity tests, and antibiotic resistance tests.
The detection of antimicrobial gene clusters and adhesion- and stress-associated genes in the genome, together with the results of tolerance tests (acid and bile salt tolerance) and in vitro antioxidant activity assays, revealed the probiotic properties of the strain. Genomic analysis combined with phenotypic studies thus confirmed the safety and probiotic properties of this strain as a potential probiotic candidate.
# Diagnostic Performance of Extrahepatic Protein Induced by Vitamin K Absence in the Hepatocellular Carcinoma: A Systematic Review and Meta-Analysis ## Abstract Background and Objectives: the early diagnosis of hepatocellular carcinoma (HCC) benefits from the use of alpha-fetoprotein (AFP) together with imaging diagnosis using abdominal ultrasonography, CT, and MRI, leading to improved early detection of HCC. Considerable progress has been made in the field, but some cases are still missed or diagnosed late, in advanced stages of the disease. Therefore, new tools (serum markers, imaging techniques) are continually being reconsidered. The diagnostic accuracy for HCC (overall and early disease) of serum AFP and of protein induced by vitamin K absence or antagonist II (PIVKA II) has been investigated, separately and in combination. The purpose of the present study was to determine the performance of PIVKA II compared to AFP. Materials and Methods: a systematic search was conducted in PubMed, Web of Science, Embase, Medline and the Cochrane Central Register of Controlled Trials, taking into consideration articles published between 2018 and 2022. Results: a total of 37 studies (5037 patients with HCC vs. 8199 patients in the control group) were included in the meta-analysis. PIVKA II presented better diagnostic accuracy for HCC than alpha-fetoprotein (global PIVKA II AUROC 0.851 vs. AFP AUROC 0.808; 0.790 vs. 0.740 in early HCC cases). Conclusions: from a clinical point of view, the concomitant use of PIVKA II and AFP can bring useful information, added to that provided by ultrasound examination. ## 1. Introduction Hepatocellular carcinoma (HCC) is the most widespread histological subtype of primary liver cancer (accounting for approximately $90\%$ of all cases [1]), with an increasing incidence [2]; at the same time, it is currently recognized as the third most common cause of cancer death worldwide [3,4,5].
Unfortunately, at the time of diagnosis, only a small percentage of patients are eligible for curative treatment, the most common reason being an advanced tumor stage [6]. Studies in the literature have shown that chronic viral hepatitis B and C, autoimmune hepatitis, nonalcoholic steatohepatitis, and genetic and epigenetic changes are the main risk factors for the development of HCC [7,8,9,10,11]. The prognosis of patients with advanced liver disease or cirrhosis (regardless of etiology), even in those responding to antiviral treatment, is influenced by the appearance of HCC [4,5]. Hepatocarcinogenesis is a gradual process characterized by genetic and molecular changes in hepatocytes, followed over time by the appearance of a neoplastic lesion detectable by imaging. Over the last ten years, the focus has shifted toward early HCC detection, which is known to influence treatment, curability of the disease and long-term survival [1,12]. Today, treatment strategies have diversified, including surgical resection, drug treatment, percutaneous treatment (ablation or chemoembolization) and liver transplant. Current guidelines recommend that, in at-risk patients, the screening strategy should be based on complementary examinations [13], such as serum alpha-fetoprotein determination and abdominal ultrasonography every 3–6 months for early detection of HCC [11,14,15,16,17,18,19]. Abdominal ultrasonography, however, remains an investigation with important limitations (patient-dependent and operator-dependent), often being unable to detect small tumor formations with the desired accuracy [11,20,21]. One of the traditional serum tumor markers commonly used for detecting and tracking HCC is alpha-fetoprotein (AFP) [22,23,24], its role having evolved over time [23].
However, despite being widely used, as a non-invasive and affordable method, according to studies in the literature it has suboptimal performance for the early detection of hepatocarcinoma [25]. Typically, a serum AFP level of 20 ng/mL is considered the borderline value to differentiate HCC from non-tumoral pathology [13]. Moreover, AFP has a high rate of false-negative results, approximately $40\%$, in the detection of early-stage tumors [16,26,27,28,29,30]. At the same time, a high proportion of patients with liver cirrhosis or chronic viral hepatitis without associated HCC frequently show false-positive results [23]. Under these conditions, the diagnostic accuracy of AFP serum level determination is unsatisfactory, due to low sensitivity (estimated between $39\%$ and $64\%$) and specificity (in the 76–$91\%$ range). New classes of biomarkers with promising results in the early detection of HCC have now been described, such as microRNAs (miRNAs) [1,31,32], PIVKA II, also known as des-gamma-carboxy prothrombin (DCP), the fucosylated fraction of AFP (AFP-L3), stanniocalcin 2 and APEX1 [33,34,35,36]. However, their use in medical practice may be limited by the absence of standardized analytical determination methods. PIVKA II (first described in 1984 [37]), an immature form of prothrombin synthesized in the liver, can be used to estimate hepatic vitamin K status, and seems to be a more suitable biomarker for the detection of vitamin K deficiency [38]. PIVKA II measurement shows increased sensitivity and specificity compared to methods conventionally used to assess vitamin K deficiency, such as standard coagulation tests (prothrombin time and activated partial thromboplastin time) [39]. In the absence of vitamin K, when its action is antagonized, or in the presence of neoplastic cells, PIVKA II is released into the blood.
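The sensitivity and specificity figures quoted above for AFP derive from the standard 2×2 diagnostic table (true/false positives and negatives against a reference diagnosis). A minimal sketch of that calculation; the counts below are hypothetical and chosen only to mirror the ~40% false-negative rate discussed in the text:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 diagnostic table.

    tp/fn: diseased patients with positive/negative test results;
    tn/fp: non-diseased patients with negative/positive results.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts illustrating a ~40% false-negative rate at a fixed cutoff
sens, spec = diagnostic_metrics(tp=60, fn=40, tn=85, fp=15)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # sensitivity=60%, specificity=85%
```

Shifting the cutoff (e.g., the 20 ng/mL AFP threshold) trades one metric against the other, which is why AUROC, a cutoff-free summary, is used in the meta-analysis below.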
In patients with gastrointestinal malignancies, PIVKA II levels were increased in most cases, with previous data reporting good sensitivity and specificity for PIVKA II in the diagnosis of gastrointestinal neoplastic disorders: $78.67\%$ and $90.67\%$ in pancreatic adenocarcinoma, and $83.93\%$ and $91.50\%$ in HCC, respectively. Additional studies are needed to establish the association of serum PIVKA II levels with colorectal cancer [40]; only one case report was found describing colorectal neoplasm with secondary dissemination to the liver and increased serum levels of PIVKA II [40]. At the same time, previously published studies have shown that PIVKA II is an effective and specific biomarker for HCC, and some researchers have demonstrated that PIVKA II levels reflect the oncogenesis and progression of HCC [41]. However, the efficacy of PIVKA II has not been sufficiently studied. Serum and tissue overexpression of PIVKA II may be a specific tumor marker for HCC, showing promising results (regardless of hepatocarcinoma stage: $62.5\%$ sensitivity and $85.5\%$ specificity), but also indicating a poor prognosis, such as the presence of microvascular invasion and intrahepatic metastases [39,42]. According to current studies, elevated serum levels of PIVKA II are associated with tumor size, microvascular invasion, and possible recurrence of HCC [12,22,43,44,45]. What differentiates PIVKA II from AFP is that the value of the former is not affected by liver disease activity [12]. In view of the above, and given the difficulty of performing adequate screening for HCC (to detect early cases), new screening methods are being examined. Current studies aim at comparative and summative evaluation of different methods, with Japan and other countries [46,47] implementing the simultaneous determination of PIVKA II and AFP as a screening method to monitor patients at high risk of developing hepatocellular carcinoma [48].
To date, results on the diagnostic performance of PIVKA II in comparison to or in combination with AFP are conflicting. The available data come mainly from studies involving Asian patients [46,49], with results from Western studies limited by relatively small sample sizes. Published studies (with the exception of the most recent ones) have been systematized into previously published meta-analyses evaluating the accuracy of HCC detection by serum determination of the PIVKA II and AFP biomarkers, alone or in combination, in patients at risk of tumor development [44,45,48,49]. In the present meta-analysis, the most recently published studies were considered in order to establish the role of PIVKA II versus AFP (globally, but also in relation to HCC stage). Knowledge of this topic is needed for better screening and diagnosis of at-risk HCC patients. The aim of this work was to extend the knowledge on the comparative evaluation of the diagnostic value of PIVKA II and AFP for HCC, especially in early HCC patients. ## 2. Materials and Methods Search strategy: literature screening for the meta-analysis. A systematic search was conducted for the interval from 1 January 2018 to 4 September 2022. Searches for relevant studies were mainly conducted in PubMed, Web of Science, Embase, Medline and the Cochrane Central Register of Controlled Trials. All publications from the databases mentioned above were reviewed, using the terms (((‘descarboxyprothrombin’ OR des-gamma-carboxy prothrombin) AND (‘liver cell carcinoma’ OR ‘hepatocellular carcinoma’) AND ‘cancer diagnosis’) OR (‘pivka’ AND (‘liver cell carcinoma’ OR ‘hepatocellular carcinoma’) AND ‘cancer diagnosis’) OR (‘DCP’ AND (‘liver cell carcinoma’ OR ‘hepatocellular carcinoma’) AND ‘cancer diagnosis’)) AND ((‘alphafetoprotein’ OR afp OR ‘alpha fetoprotein’ OR alfa-fetoprotein) AND (‘liver cell carcinoma’ OR ‘hepatocellular carcinoma’) AND ‘cancer diagnosis’).
Only human studies from the mentioned period were selected for screening. The papers were rigorously screened: two main investigators performed independent literature searches in order to identify the previously published papers, and all useful papers were read by both investigators, including those with negative results. Duplicates were removed. Only articles written in English that had abstracts were taken into consideration. Articles presented only as abstracts or conference presentations, reviews, systematic reviews, meta-analyses, editorials and in vitro studies were excluded. The quality assessment of diagnostic accuracy studies (QUADAS) tool was applied to evaluate the quality of the selected studies. The following data were extracted from the articles studied: title, authors, year of publication, study identification item, country, number of locations where the study was conducted, number of patients included (with HCC vs. without HCC, and early HCC cases), study design, and etiology of liver disease; for both PIVKA II and AFP, the AUROC (overall and for early HCC cases), sensitivity and specificity were recorded. A flow diagram of the literature search strategy and study selection process is summarized in Figure 1. According to the literature, two tumor staging systems are currently used to define the extent of HCC: the BCLC (Barcelona Clinic Liver Cancer) staging system [50,51] and the 8th edition American Joint Committee on Cancer tumor–node–metastasis (TNM) staging system [52]. BCLC stage 0 is defined as a tumor of less than 2 cm, performance status 0 and normal liver function (Child–Pugh A). BCLC stage A is defined in patients presenting a single tumor of any size or 3 nodules < 3 cm in diameter, performance status 0 and Child–Pugh class A or B. In this meta-analysis, early-stage HCC was defined as BCLC stage 0/A and/or 8th edition TNM stage I (depending on the data reported by the included studies).
## Statistical Analysis MedCalc software version 20.115 (Ostend, Belgium) was used to perform the meta-analysis. Using each study's AUC value and the corresponding standard error (SE), the weighted summary AUC (sAUC) was calculated. Most of the studies did not report the standard error for the AUROC; the SE(AUC) was therefore calculated using the formula proposed by Hanley and McNeil (1982), presented in Formula (1). Publication bias was assessed using funnel plots. Forest plots showing the overall effect were constructed. Depending on the presence or absence of heterogeneity, a fixed or random effects model was used; an I2 value >$25\%$ was considered indicative of heterogeneity. Formula (1), the AUROC standard error estimation: $SE(AUC) = \sqrt{\frac{AUC(1-AUC) + (N_1-1)(Q_1-AUC^2) + (N_2-1)(Q_2-AUC^2)}{N_1 N_2}}$, where $Q_1 = \frac{AUC}{2-AUC}$; $Q_2 = \frac{2AUC^2}{1+AUC}$; $N_1$ = positive group (with HCC); $N_2$ = negative group (without HCC). A p value < 0.05 was considered statistically significant. ## 3. Results A total of 37 studies were included in the meta-analysis. Overall, 13,236 patients were included: 5037 patients with HCC (case group) vs. 8199 patients in the control group. The control group comprised healthy patients (without previous liver disease) and patients with chronic hepatitis B or C, liver cirrhosis or other at-risk conditions. Patients with HCC were divided depending on their HCC stage; 1513 had early HCC. Complete data about the included studies are presented in Table 1. For each included study, the performances of PIVKA II and AFP are reported in Table 2 and Table 3 (globally and in early HCC cases). Sensitivity and specificity for PIVKA II and AFP are also reported. The sAUCs of AFP and PIVKA II for the discrimination between patients with and without HCC were 0.808 ($95\%$ CI 0.782 to 0.834) vs. 0.851 ($95\%$ CI 0.823 to 0.878); the data are reported in Figure 2. Considering that the studies showed heterogeneity (in both cases), random effects models were applied.
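The computation described in the Statistical Analysis section (a Hanley–McNeil standard error per study, then an inverse-variance weighted summary AUC with a random-effects adjustment when heterogeneity is present) can be sketched as follows. This is a minimal illustration, not MedCalc's implementation: the per-study AUCs and group sizes below are hypothetical, and the random-effects variant uses the common DerSimonian–Laird estimator as an assumed stand-in.

```python
import math

def se_auc(auc, n1, n2):
    """Hanley & McNeil (1982) standard error of an AUC.
    n1 = positive (diseased) group size, n2 = negative group size."""
    q1 = auc / (2.0 - auc)
    q2 = 2.0 * auc ** 2 / (1.0 + auc)
    var = (auc * (1.0 - auc)
           + (n1 - 1) * (q1 - auc ** 2)
           + (n2 - 1) * (q2 - auc ** 2)) / (n1 * n2)
    return math.sqrt(var)

def pooled_auc(aucs, ses):
    """Inverse-variance weighted summary AUC: fixed effect, DerSimonian-Laird
    random effect, and the I^2 heterogeneity statistic (%)."""
    w = [1.0 / s ** 2 for s in ses]
    fixed = sum(wi * a for wi, a in zip(w, aucs)) / sum(w)
    q = sum(wi * (a - fixed) ** 2 for wi, a in zip(w, aucs))
    df = len(aucs) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0  # between-study variance
    w_re = [1.0 / (s ** 2 + tau2) for s in ses]
    random_eff = sum(wi * a for wi, a in zip(w_re, aucs)) / sum(w_re)
    return fixed, random_eff, i2

# Hypothetical per-study AUCs and group sizes (not the meta-analysis data)
aucs = [0.85, 0.80, 0.88]
ses = [se_auc(a, 120, 200) for a in aucs]
fixed, random_eff, i2 = pooled_auc(aucs, ses)
print(round(fixed, 3), round(random_eff, 3), round(i2, 1))
```

When I² exceeds the 25% threshold stated above, the random-effects summary (which widens study weights by the between-study variance tau²) is the one reported.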
Taking into consideration the capacity of discrimination in early HCC cases, the sAUCs of AFP and PIVKA II were 0.740 ($95\%$ CI 0.694 to 0.787) and 0.790 ($95\%$ CI 0.751 to 0.828), respectively; the data are reported in Figure 3. Some studies reported values for both AFP and PIVKA II; some studies also reported values both globally for HCC and for early HCC. AFP = alpha-fetoprotein; PIVKA II = protein induced by vitamin K absence or antagonist-II. ## 4. Discussion Nowadays, neoplastic diseases show an increasing prevalence, with HCC being diagnosed more and more frequently, even in young patients. Lifestyle changes, with an increased incidence of nonalcoholic steatohepatitis, together with chronic viral hepatitis and autoimmune hepatitis, increase the risk of HCC. Considerable progress has been made in HCC diagnosis, but some cases are still missed or diagnosed late, in advanced stages of the disease. Therefore, new tools (serum markers, high-performance imaging techniques) are continually being reconsidered. For decades, AFP has been widely used as a tumor marker in the surveillance of populations at high risk of developing HCC, but some of its limitations are well known. The reported sensitivity and specificity of this biomarker (40–$65\%$ and 76–$96\%$, respectively) differ significantly depending on the characteristics of the studied group [22,86]. AFP serum values are often elevated in patients with chronic liver disease or cirrhosis without HCC [22]. All current guidelines recommend the additional use of imaging diagnosis in order to improve diagnostic accuracy. Ultrasonography, CT and MRI have limitations, sometimes encountering difficulties in diagnosing small lesions. In view of these data, AFP and ultrasonography have been used together to improve diagnostic sensitivity in medical practice [3,86,87,88,89,90], but for the moment the accuracy remains uncertain [91].
Under these conditions, HCC screening must be improved to detect neoplastic lesions at early stages. To date, several promising serum tumor markers with potential for early diagnosis and surveillance of HCC have been proposed [3,21,44,45,86,87,88,89,90,92,93], of which PIVKA II appears to be the most promising, with recently published data on its performance (alone or in combination with AFP or ultrasonography). No clear PIVKA II cut-offs for HCC diagnosis, or for early HCC diagnosis, have yet been established. In addition, different methods are used for biomarker determination, so more data need to be published to clarify these aspects. To date, to the best of our knowledge, the clinical and laboratory factors influencing PIVKA II values have not been exhaustively investigated. The current meta-analysis brings to attention new data about the usefulness and ability of PIVKA II to detect HCC. The literature is scarce in revealing the role of PIVKA II versus AFP; this paper provides an overview of recently published data about the role of PIVKA II vs. AFP in HCC diagnosis. In this meta-analysis, PIVKA II presented greater accuracy for HCC diagnosis, both when taking all cases into consideration (0.851 vs. 0.808) and in early HCC cases (0.790 vs. 0.740). The reported results (the better discriminatory value of PIVKA II) are in line with those reported by Caviglia [3] (11 studies, published between 2011 and 2017), Chen H [94] (27 studies, 2000–2016), Fan J [7] (40 studies, up to 31 December 2018), Fang Y [95] (28 studies, 2015–2021), and Xing H [87] (31 studies, up to 20 December 2017). The calculated AUROCs reported here were also approximately similar to those reported in the previously mentioned studies. Some meta-analyses evaluating just one of the two biomarkers (PIVKA II or AFP) have also been published, with results regarding the AUROC values consistent with those of this study [89,96,97].
A novel perspective brought to attention is the parallel with the standard marker (AFP), with higher accuracy found for PIVKA II in the early diagnosis of HCC. Similar data have been published by Xing [87]. The results of this study highlight the possible role of PIVKA II in providing new data useful for daily medical practice. A very recently published study (not included in the meta-analysis) also revealed that PIVKA II had better predictive performance than AFP, both globally and in early HCC (the reported values being approximately similar to ours [22]). The results of the study support others' recommendations that PIVKA II can be useful in the early diagnosis of HCC [66]. It was impossible to determine the PIVKA II and AFP performances depending on the etiology of the liver diseases, so mixed etiologies were taken into consideration; frequently, studies do not differentiate according to liver disease etiology, and in most cases the reported results are calculated globally. It must be mentioned that the BCLC 0 and A HCC classes were taken into consideration in a unitary way. There is, of course, a discrepancy between the two classes regarding subsequent treatment, but for the moment it was not possible to perform a detailed, stratified analysis. Due to the heterogeneity of the reported studies (determined by the diversity of study populations in different countries, the methodology used and the sample sizes), these findings might not be representative of all populations, and further research is needed. Stratified analysis depending on gender, ethnicity, age or liver disease type and stage represents an area to be explored further. More data should be published regarding the cut-off values, for a unitary approach to HCC diagnosis. This study provides the backbone for a future meta-analysis evaluating the accuracy of the PIVKA II and AFP association in HCC diagnosis.
Of the listed studies, just a few reported combined accuracy. In addition, future studies on the topic are recommended to determine the serum values of PIVKA II after HCC treatment (surgical or chemotherapy), which could bring useful information for monitoring treatment results and for predicting diagnosis, relapse and survival. ## 5. Conclusions These results provide a significant step toward the diagnosis of HCC by determining the serum value of vitamin-K-dependent proteins used as tumor biomarkers, along with other paraclinical examinations. From a clinical and practical point of view, the use of PIVKA II concomitantly with or instead of AFP brings useful information, in addition to that provided by ultrasound examination. The emerging role of PIVKA II probably lies in patients with previous hepatic diseases (hepatitis, cirrhosis), where the limitations of AFP are well known. This study provides the backbone for future studies on the relationship with an earlier diagnosis of hepatocarcinoma.
# Antioxidant and In Vivo Hypoglycemic Activities of Ethanol Extract from the Leaves of Engelhardia roxburghiana Wall, a Comparative Study of the Extract and Astilbin

## Abstract

The leaves of *Engelhardia roxburghiana* Wall (LERW) have been used as a sweet tea in China throughout history. In this study, the ethanol extract of LERW (E-LERW) was prepared and its components were identified by HPLC-MS/MS. The results indicate that astilbin was the predominant component in E-LERW. In addition, E-LERW was abundant in polyphenols. Compared to astilbin, E-LERW presented much more powerful antioxidant activity. E-LERW also had a stronger affinity for α-glucosidase and exerted a more vigorous inhibitory effect on the enzyme. Alloxan-induced diabetic mice had significantly elevated glucose and lipid levels. Treatment with E-LERW at the medium dose (M) of 300 mg/kg reduced the levels of glucose, TG, TC, and LDL by $16.64\%$, $12.87\%$, $32.70\%$, and $22.99\%$, respectively. In addition, E-LERW (M) decreased food intake, water intake, and excretion by $27.29\%$, $36.15\%$, and $30.93\%$, respectively. Moreover, E-LERW (M) therapy increased mouse weight and insulin secretion by $25.30\%$ and $494.52\%$, respectively. Compared with the astilbin control, E-LERW was more efficient in reducing food and drink consumption and in protecting the pancreatic islets and body organs from alloxan-induced damage. The study demonstrates that E-LERW may be a promising functional ingredient for the adjuvant therapy of diabetes.

## 1. Introduction

Nowadays, diabetes is a high-incidence disease that seriously challenges human health. Diabetic patients account for about $10\%$ of the world's population. The complications of diabetes include renal injury, retinopathy, diabetic cataract, diabetic foot, coronary disease, and so on, which not only cause patients great suffering but also place a heavy economic burden on families and society.
The prevention and treatment of diabetes has become a major concern in the food and medicinal fields. Natural plants and their active ingredients exhibit multi-target, multi-pathway, and multi-directional hypoglycemic characteristics. Compared to chemical drugs, herbal medicines have mild and sustained effects with low toxicity. The multi-target property not only benefits glucose modulation but also contributes to the alleviation of diabetic complications. Natural products with known hypoglycemic activity are becoming a promising alternative to current drugs for diabetic therapy. *Engelhardia roxburghiana* Wall (ERW) is a subtropical tree grown in the Guangdong, Guangxi, and Fujian provinces of China. The leaves of ERW (LERW) have long been used as a sweet tea in Chinese folk medicine to treat obesity, fever, and pain. Owing to its abundance of flavonoids and phenols, LERW has multiple physiological activities, including inhibition of aldose reductase, bladder protection, and anticoagulant, hypolipidemic, and antioxidant activities [1]. Flavonoids such as astilbin, taxifolin, and engeletin are the main active ingredients responsible for the functions of LERW [2]. Among them, astilbin is the predominant component and is regarded as an important indicator for evaluating the quality of LERW. As the major constituent of LERW, astilbin possesses versatile biological activities. Astilbin was able to inhibit the generation of superoxide anion and the peroxidation of microsomal lipid, thereby protecting red blood cells from oxidation and hemolysis [3]. Astilbin had an inhibitory effect on recombinant human aldose reductase and hampered the formation of advanced glycation end products, showing potential in the prevention and treatment of diabetic syndrome [4]. Astilbin has also shown effects in the treatment of diabetes and related secondary complications [5], such as diabetic nephropathy.
In addition, astilbin displayed a lipid-lowering capacity in rats by increasing the activity of lipoprotein lipase and promoting the lipolysis of rat fat pads [6]. Astilbin is the chief constituent of LERW. As the hypoglycemic effect of astilbin has been reported extensively, LERW is also assumed to possess hypoglycemic function. The safety and low toxicity of LERW have been well verified by its long-term use as a sweet tea, which makes it a promising candidate for healthcare products for the prevention and treatment of diabetes. Astilbin, the primary active component, may be more efficient than the extract of LERW (E-LERW) in lowering glucose levels. Nevertheless, there is another possibility: owing to the synergistic effect of the other polyphenols present in LERW, the extract might be more potent. It is important to clarify the difference in activity between the purified component and E-LERW before designing LERW-based diabetic care products. This study aimed to compare the antioxidant activity of E-LERW and astilbin and to evaluate their hypoglycemic effects via an in vitro α-glucosidase inhibition test and an in vivo diabetic mouse model. HPLC coupled with tandem MS was used to determine and identify the polyphenols in E-LERW, to illustrate the relationship between the hypoglycemic effect and the composition of the extract.

## 2.1. Materials

LERW was purchased from Youluhuan Ecological Agriculture Co., Ltd. (Bozhou, China). 2,2-diphenyl-1-picrylhydrazyl (DPPH), 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS), α-glucosidase, tannins, acarbose, and rutin were obtained from Shanghai Yuanye Biotechnology Co., Ltd. (Shanghai, China). p-nitrophenyl-β-D-galactopyranoside (pNPG) was obtained from Alfa Aesar Chemical Co., Ltd. (Shanghai, China). Astilbin with $98\%$ purity was purchased from Priva Technology Development Co., Ltd. (Chengdu, China). Liposomes were prepared by our lab with a size of 131.84 ± 0.67 nm [7].

## 2.2. Preparation of E-LERW

The dried LERW was crushed, passed through a 60-mesh sieve, and extracted with $60\%$ ethanol (v/v). The extraction was conducted in a MAR-3 microwave reactor (Shanghai Yuezong Instrument Company, Shanghai, China) at 56 °C for 67 s. The material-to-liquid ratio was 1:15. After the extraction, the sample was filtered, concentrated under reduced pressure, and finally freeze-dried to obtain the E-LERW [8].

## 2.3. Identification by HPLC-MS/MS

The extract was prepared as a 1 mg/mL solution in $60\%$ ethanol (v/v), filtered through a 0.22 μm microporous membrane, and separated on a Waters Acquity UHPLC BEH-C18 column (2.1 mm × 100 mm, 1.7 μm). The analysis was performed on a UHPLC system coupled with a Xevo triple quadrupole electrospray tandem MS (Micromass Waters, Milford, MA, USA). An electrospray ionization (ESI) source was used for the determination of the components, and the full MS/dd-MS2 scan mode for qualitative and quantitative analysis. A sample of 10 μL was injected into the system. The mobile phase consisted of acetonitrile and $0.1\%$ acetic acid (22:78, v/v) at a flow rate of 0.7 mL/min. The column temperature was 35 °C. Identification was performed by multiple reaction monitoring (MRM). The ions were detected in both positive and negative mode over m/z 100–1000. The other MS parameters were set as follows: spray voltage 3.0 kV, S-lens voltage 50 V, capillary temperature 350 °C, and auxiliary gas heating temperature 350 °C [9]. In addition, the on-line UV spectra of the components were obtained through diode array detection (DAD), and the wavelength of maximum absorbance was determined.

## 2.4.1. Total Flavonoids

The sample was prepared at 1 mg/mL in $60\%$ ethanol. The content of total flavonoids was determined using the sodium nitrite–aluminum nitrate colorimetric method [10,11] and was expressed as mg rutin equivalent (mg RE)/g.
In this study, the absorbance of the reference rutin varied linearly with concentration in the range of 10 to 200 μg/mL. The regression equation was $A = 11.094C - 0.0018$ ($r^2 = 0.9993$).

## 2.4.2. Total Phenols

The total phenols in E-LERW were determined using the methods reported by Yao et al. and Dirar et al. [12,13] and were expressed as mg gallic acid (GA) equivalent (mg GE)/g. The absorbance of the reference GA was linear with concentration in the range of 10 to 500 μg/mL. The regression equation was $Y = 102.2X + 0.0616$ ($r^2 = 0.9991$).

## 2.4.3. Astilbin

The sample was analyzed on an LD-20AD HPLC system (Shimadzu, Tokyo, Japan). The separation was performed on a SinoChrom ODS-BP column (4.6 mm × 150 mm, 5 μm). The detection conditions were the same as described in Section 2.3. The detection wavelength was 291 nm, with an injection volume of 20 μL. In the range of 0.02 to 1.0 mg/mL, the peak area of astilbin was linear with concentration. The regression equation was $Y = 54756X - 255.86$ ($r^2 = 0.9992$).

## 2.5.1. Scavenging DPPH Free Radicals

E-LERW and astilbin were prepared as series of solutions containing astilbin from 0.2 to 1 mg/mL. The determination was carried out as reported by Makgatho et al. [14]. Ascorbic acid was used as the positive control.

## 2.5.2. Scavenging ABTS+ Radicals

The measurement was conducted following the method reported by Aruwa et al. [15].

## 2.5.3. Ferric Reducing Antioxidant Power

The ferric reducing antioxidant power (FRAP) of E-LERW and astilbin was determined according to the method proposed by Hao et al. [16].

## 2.5.4. Inhibition of Lipid Membrane Oxidation

The lyophilized liposomes were re-dispersed in deionized water; 0.5 mL of the dispersion was drawn and blended with 0.5 mL of E-LERW or astilbin at different concentrations. The sample was incubated at 37 °C for 1 h.
Subsequently, 1 mL of $1\%$ thiobarbituric acid was added, and the mixture was boiled for 10 min and cooled to room temperature. The solution was centrifuged at 1000 r/min for 10 min. The absorbance of the supernatant was measured at 532 nm (A). Meanwhile, the absorbance of the blank control (A0) was determined using 0.5 mL of deionized water in place of the sample. Tannic acid was used as the positive control. The inhibitory rate was calculated according to Equation (1) [17]:

$$\text{Inhibitory rate} = \frac{A_0 - A}{A_0} \times 100 \quad (1)$$

## 2.6. Inhibitory Effect on α-Glucosidase

The inhibitory effect on α-glucosidase was examined according to the method described by Broholm et al. [18]. Briefly, 50 μL of sample was blended with 50 μL of 0.5 U/mL α-glucosidase and incubated at 37 °C for 30 min. Afterward, 50 μL of 1 mM substrate pNPG was added and the mixture was reacted at 37 °C for another 30 min. The reaction was terminated by adding 50 μL of 0.2 M sodium carbonate, and the absorbance at 405 nm was determined. In addition, the background absorbance was measured in parallel using PBS in place of the enzyme. The inhibitory curve was constructed from the inhibitory rates versus astilbin concentrations. Acarbose was used as the positive control.

## Kinetic Analysis on the Inhibition of α-Glucosidase

The concentration of α-glucosidase was fixed at 0.5 U/mL. The inhibitory velocity of E-LERW and astilbin on α-glucosidase was determined at different concentrations of the substrate pNPG [19]. Double-reciprocal curves were plotted based on the following Lineweaver–Burk equation (Equation (2)):

$$\frac{1}{v} = \frac{K_m}{V_{max}}\left(1 + \frac{[I]}{K_i}\right)\frac{1}{[S]} + \frac{1}{V_{max}}\left(1 + \frac{[I]}{\alpha K_i}\right) \quad (2)$$

and a secondary plot was constructed as Equation (3):

$$\text{Slope} = \frac{K_m}{V_{max}} + \frac{K_m [I]}{V_{max} K_i} \quad (3)$$

where v is the inhibitory velocity of the sample on α-glucosidase, and [I] and [S] represent the concentrations of inhibitor and substrate, respectively. Ki and Km are the inhibition constant and the Michaelis–Menten constant, respectively.
α is a constant representing the ratio of uncompetitive to competitive inhibition.

## 2.7.1. Animal Experiment Design

The animal experiment was approved by the Ethics Committee of Chengdu University, Chengdu, China (protocol number: CDPS 2020-122), and all procedures adhered to the European Community Guidelines (86/609/EEC) for the Care and Use of Laboratory Animals. Male Kunming mice, weighing 18 to 22 g, were purchased from Chengdu Dashuo Experimental Animal Company (Chengdu, China). Before the experiment, all mice were allowed to adapt to the environment for 3 days. The mice in the normal control (NC) group were fasted for 12 h with free access to water, and fasting blood glucose (FBG) was measured via the tail vein and used as the baseline blood glucose level of normal mice. The remaining mice were fasted for 24 h, followed by an intraperitoneal injection of alloxan at 200 mg/kg to develop a diabetic mouse model [20]. Fasting blood glucose was measured after 3 days. Mice with a blood glucose level over 11.1 mmol/L were diagnosed as diabetic and were randomly divided into 6 groups of 6 mice each. The groups included the diabetic model control (MC); the astilbin control (AC) at a dosage of 30 mg/kg; the positive control (PC) of metformin hydrochloride at a dose of 100 mg/kg; and high (H), medium (M), and low dose (L) E-LERW groups at 600, 300, and 150 mg/kg, equivalent to doses of 56.88, 28.44, and 14.22 mg astilbin/kg, respectively. Oral gavage was performed twice a day for 28 consecutive days [21]. The scheme of the experimental design is displayed in Figure 1.

## 2.7.2. Oral Glucose Tolerance Test

In the final week of treatment, all mice were orally given a glucose solution at 1.5 g/kg after being fasted for 12 h [22]. The blood glucose level was measured every half hour, and oral glucose tolerance was expressed as the AUC over 2 h.

## 2.7.3. Blood Sample Analysis

When the experiment was completed, the mice were sacrificed by carbon dioxide inhalation. Mouse blood was collected in tubes pre-coated with heparin sodium and centrifuged at 3000 r/min for 10 min. The supernatant serum was stored at −20 °C until measurement. The levels of insulin, triglyceride (TG), total cholesterol (TC), high-density lipoprotein (HDL), and low-density lipoprotein (LDL) were measured with commercial ELISA kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, China). All determinations were carried out according to the instructions of the reagent kits.

## 2.7.4. Organ Index

After the mice were sacrificed, the livers and kidneys were removed, placed on filter paper to remove blood, and weighed. The weight ratios of organ to body (organ indexes) were calculated.

## 2.8. Data Analysis

All data are expressed as mean ± standard error. The diagrams were plotted using Origin 8.0 (OriginLab Corporation, Northampton, MA, USA). Differences between the data were evaluated by one-way analysis of variance (ANOVA) and Duncan's test using SPSS version 10.0 (IBM SPSS Inc., Chicago, IL, USA). A difference was considered statistically significant when $p \leq 0.05$.

## 3.1. HPLC-MS/MS Analysis

The chromatogram and MS identification results of E-LERW are shown in Figure 2 and Table 1, respectively. A total of 10 components were identified with reference to the database of the instrument. α-Lactose was determined by the molecular ions of m/z 360.1497 (M+NH4)+ and 365.1050 (M+Na)+. The ion with m/z 145.0494 was assigned to hydroxypropyl pyran, which lost one water molecule to form the ion of m/z 127.0390. This ion further dissociated one propylene and yielded the ion of m/z 85.0289. Malic acid had MS2 fragments of m/z 115.0023 (M-H-H2O, A) and 71.0125 (A-CO2).
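Such fragment assignments can be sanity-checked arithmetically: the gap between a precursor ion and its fragment should match the monoisotopic mass of the neutral lost. A minimal sketch, using the m/z pairs quoted above for hydroxypropyl pyran (water loss) and malic acid (CO2 loss):

```python
# Cross-check MS/MS neutral-loss assignments: the m/z gap between a parent
# ion and its fragment should equal the monoisotopic mass of the lost
# neutral (H2O, CO, or CO2). The m/z values come from the assignments above.

NEUTRALS = {"H2O": 18.0106, "CO": 27.9949, "CO2": 43.9898}

def neutral_loss(parent_mz, fragment_mz, tol=0.01):
    """Return the name of the neutral whose mass matches the m/z gap, or None."""
    gap = parent_mz - fragment_mz
    for name, mass in NEUTRALS.items():
        if abs(gap - mass) <= tol:
            return name
    return None

print(neutral_loss(145.0494, 127.0390))  # hydroxypropyl pyran: loss of H2O
print(neutral_loss(115.0023, 71.0125))   # malic acid fragment A: loss of CO2
```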
Compound 3 displayed the ion of a hydroxyl triazole ring at m/z 96.9682, which eliminated one water molecule to produce the ion of m/z 78.9576. Quercetin presented MS2 fragments of m/z 285.0385 (M+H-H2O, C), 257.0442 (C-CO, D), and 238.9389 (D-CO). In addition, the fragment of m/z 183.0285 was the reduced product of the flavone backbone structure without the catechol moiety [23]. The ion at 153.0181 was the catechol lactone ring (C6H2(OH)2(OCOO)). Astilbin displayed MS2 fragments of m/z 303.0607 (M-H-rhamnose, E) and 285.0400 (E-H2O). The ion of m/z 178.9975 was the oxidized flavone backbone structure in the absence of catechol; this fragment lost one carbon monoxide to form the ion of m/z 151.0024. The compounds engeletin and taxifolin also showed the characteristic ions at 179 and 151, as astilbin did. In addition, the peak of m/z 269.0452 in the spectrum of engeletin was attributed to the detachment of one rhamnose from the parent molecule. Taxifolin presented ions with m/z 285.0401 (M-H-H2O) [24] and 125.0231, assigned to pyrogallol [23]. The MS2 of citric acid included the ions of m/z 111.0074 and 87.0074, in accordance with what AliAbadi et al. reported [25]. Compounds 6 and 7 could not be detected in MS2 due to weak fragment signals. The flavonoid-like compounds 4 to 9 had maximum absorbance wavelengths of around 290–295 nm [26]. Quercetin and maritimetin had maximum wavelengths over 300 nm due to their longer conjugated structures.

## 3.2. Determination of Active Components

The contents of astilbin, total flavonoids, and total phenols in E-LERW were 94.79 ± 2.49 mg/g, 153.42 ± 2.74 mg RE/g, and 255.74 ± 4.16 mg GE/g, respectively. This indicates that E-LERW is enriched in polyphenols.

## 3.3. Antioxidant Activity

The results for E-LERW in scavenging DPPH free radicals, scavenging ABTS+ free radicals, FRAP, and inhibition of lipid membrane oxidation are shown in Figure 3.
The activities of both E-LERW and astilbin were concentration-dependent, increasing as the concentration rose. At each concentration tested, the capacity of E-LERW to scavenge free radicals was significantly higher than that of astilbin ($p \leq 0.05$, Figure 3A,B). Meanwhile, E-LERW also exhibited much stronger FRAP than astilbin ($p \leq 0.05$, Figure 3C) and a more potent capacity to inhibit lipid membrane oxidation (Figure 3D). When the concentration reached 2 mg/mL, E-LERW prevented $75\%$ of lipid membrane oxidation, while the inhibitory rate of astilbin was less than $20\%$ at the same concentration. The inhibitory effect of astilbin remained low even when the concentration reached 10 mg/mL. The ascorbic acid control presented much stronger antioxidant activity than both astilbin and E-LERW over the examined concentration range ($p \leq 0.01$). When the concentration was below 1.5 mg/mL, tannic acid exhibited a significantly higher inhibitory capacity against lipid membrane oxidation ($p \leq 0.01$).

## 3.4.1. Inhibition of α-Glucosidase

The inhibitory effects of E-LERW and astilbin on α-glucosidase are shown in Figure 4A. The inhibitory rates of both samples and of the acarbose control were concentration-dependent, increasing with concentration. The inhibitory strength of E-LERW was remarkably higher than that of astilbin over the examined concentration range ($p \leq 0.05$), while the acarbose control displayed much stronger inhibitory activity than both E-LERW and astilbin ($p \leq 0.05$). The concentrations giving a $50\%$ inhibitory rate (IC50) for E-LERW, astilbin, and acarbose were 0.46 ± 0.09, 1.12 ± 0.17, and 0.19 ± 0.03 mg/mL, respectively.

## 3.4.2. Inhibitory Kinetic Analysis

The Lineweaver–Burk curves of E-LERW and astilbin are shown in Figure 4B,C, respectively.
As the inhibitor concentration increased, the vertical-axis intercept (1/Vmax) rose and the absolute value of the horizontal-axis intercept decreased, indicating that the interaction between the samples and α-glucosidase followed a mixed inhibition mode [19]. The secondary plot of slope versus inhibitor concentration was linear (Figure 4D,E), showing that both E-LERW and astilbin had a single inhibitory site on α-glucosidase. The calculated Ki values of E-LERW and astilbin were 0.145 and 0.474 mg/mL, respectively.

## 3.5.1. Body Weight, Food Intake, Water Intake and Excretion

Table 2 shows the body weight, the amount of excretion, and the food and water consumption of mice in the different groups. On the first day after alloxan injection, the diabetic mice had a food intake similar to that of normal mice, but more than threefold the water consumption and, as a result, over three times the excretion of the normal mice, demonstrating the successful establishment of the diabetic mouse model. Although the body weights of mice in all groups increased after 28 d, the weights of the alloxan-injected mice were significantly lower than those of the non-injected normal control (NC) group ($p \leq 0.05$). Nevertheless, compared to the untreated model control (MC) group, the groups treated with metformin (PC), astilbin (AC), and E-LERW at high (H) and medium (M) dosage showed weight increments of $49\%$, $18\%$, $38\%$, and $25\%$, respectively, confirming the therapeutic effectiveness of metformin, astilbin, and E-LERW against diabetes. Although the weights of the diabetic mice decreased, their food intake, water intake, and excretion increased dramatically ($p \leq 0.01$). The food and drink consumed by the mice in the MC group were 1.8 and 6.3 times the amounts consumed by normal mice.
After treatment with metformin, astilbin, and E-LERW at high (H), medium (M), and low (L) dosage, food intake diminished to 1.13, 1.38, 1.18, 1.31, and 1.74 times the normal intake, respectively, and drinking dropped to 2.79, 4.38, 3.33, 4.02, and 6.16 times normal drinking, respectively. The excretion of MC mice was seven times that of normal mice; with the different treatments, excretion was reduced to 3.21, 6.30, 3.93, 5.04, and 7.09 times the normal amount, respectively. The results show that metformin (PC) had the most powerful therapeutic effect, followed by E-LERW (H) and (M), while astilbin (AC) and E-LERW (L) had weak activity in alleviating the symptoms triggered by a high glucose level.

## 3.5.2. Fasting Blood Glucose and Insulin

Figure 5A shows the fasting blood glucose (FBG) levels of mice receiving the different treatments over 28 d. As time progressed, the MC group and the group fed E-LERW (L) maintained high and invariable glucose levels, while the other diabetic mice treated with the different samples showed a gradually declining FBG. On day 28 of therapy, the FBG of the mice receiving metformin, astilbin, and E-LERW (H) and (M) was reduced to $35\%$, $87\%$, $65\%$, and $83\%$ of the MC level, respectively. Metformin again presented the strongest hypoglycemic activity, while astilbin and E-LERW exhibited moderate strength. E-LERW (M) contained approximately $10\%$ astilbin, corresponding to an astilbin dose equivalent to that of the AC group. Figure 5B indicates that the injection of alloxan severely damaged islet function: the insulin level of MC mice was only $4.7\%$ that of normal mice. Under treatment with metformin, astilbin, and E-LERW (H, M, and L), insulin secretion was restored to $72.0\%$, $15.0\%$, $59.5\%$, $28.0\%$, and $5.4\%$ of the normal level, respectively, implying that astilbin and E-LERW helped to restore the damaged islets.

## 3.5.3.
Oral Glucose Tolerance Test

Oral glucose tolerance and the corresponding area under the curve (AUC) for each group are displayed in Figure 5C,D, respectively. The glucose levels of all groups peaked 30 min after the oral administration of glucose and then gradually decreased. The peak glucose concentration of the MC group rose to 3.74 times that of normal mice. After treatment with metformin, astilbin, and E-LERW (H, M, and L) for 28 d, the peak level was reduced to 1.81, 3.38, 2.70, 3.41, and 3.71 times the normal level, respectively, showing the therapeutic effect of metformin, astilbin, and E-LERW in improving the oral glucose tolerance of diabetic mice. The AUC is another indicator of oral glucose tolerance. The AUC of the MC group was 3.94 times that of normal mice, verifying the alloxan-induced impairment of glucose tolerance. This value was reduced to 1.67 and 3.43 times the normal level after treatment with metformin and astilbin, respectively, while E-LERW (H, M, and L) decreased the AUC to 2.64, 3.33, and 3.93 times the normal value. The trend was similar to the effects of the various samples in diminishing the peak glucose concentration. Meanwhile, the hypoglycemic activity of E-LERW (M) was consistent with that of the astilbin control.

## 3.5.4. Blood Lipid Analysis

Patients with diabetes and prediabetes are at increased risk of dyslipidemia and cardiovascular disease [27]. As shown in Table 3, the injection of alloxan also significantly increased the levels of TG, TC, and LDL and markedly reduced the concentration of HDL in MC mice ($p \leq 0.01$). The administration of the various samples decreased the lipid levels and boosted the HDL concentration to different degrees. The order of lipid-lowering strength was metformin > E-LERW (H) > astilbin and E-LERW (M) > E-LERW (L) ($p \leq 0.05$). E-LERW (M) presented stronger activity than astilbin in reducing TC and LDL, but the difference was not significant ($p > 0.05$).

## 3.5.5.
Effects of E-LERW on Organ Indexes of Liver and Kidney

A state of high glucose also impairs the liver and kidneys. The organ indexes of mice in each group are shown in Table 4. Compared to the normal mice, the liver index of the MC group increased by $33\%$, while those of the metformin, astilbin, and E-LERW (H, M, and L) groups increased by $8\%$, $27\%$, $12\%$, $21\%$, and $34\%$, respectively. The kidney index of the MC group increased by $67\%$, while those of the treatment groups rose by $8\%$, $51\%$, $27\%$, $44\%$, and $65\%$, respectively. This indicates that diabetes exerts a more detrimental impact on the kidneys. E-LERW has the function of preventing liver and kidney swelling, and the medium dose exhibited a stronger capacity than the purified compound astilbin in protecting the organs.

## 4. Discussion

Compared to astilbin, E-LERW presented much stronger antioxidant and α-glucosidase-inhibitory activity in vitro. Perez-Najera et al. obtained an astilbin-enriched extract from *Smilax aristolochiifolia* root containing astilbin at 48.76 mg/g [28]; the inhibitory rate of that extract against α-glucosidase was lower than $10\%$. The vigorous strength of E-LERW may originate from the combined effect of astilbin and the other flavonoids present in LERW, such as quercetin and engeletin. Moreover, in the inhibitory kinetic test, the Ki of astilbin was 3.27 times that of E-LERW, implying that the affinity between the enzyme and E-LERW was much stronger than that of astilbin. In the animal experiment, E-LERW significantly lowered the alloxan-induced elevation of blood glucose levels in mice. The E-LERW (M) group received a similar amount of astilbin to the astilbin control (AC) group. Although E-LERW exhibited much stronger antioxidant and glucosidase-inhibitory effects than astilbin in vitro, E-LERW (M) did not display a more powerful effect than AC in lowering fasting glucose levels or enhancing oral glucose tolerance.
A possible reason is that the hypoglycemic process involves various complex mechanisms, for example, decreasing glucose absorption from the small intestine, hindering glucose production in vivo, promoting glucose uptake by tissues, enhancing glucose clearance from the body, and so on [29]. Recent studies have found that DNA methylation, histone modification, and non-coding RNA expression also contribute to the pathogenesis of diabetes [30]. Inhibition of α-glucosidase only means that the yield of glucose is reduced and glucose absorption is slowed down. Taken together, this suggests that astilbin is the major component responsible for the hypoglycemic function of E-LERW. Although the glucose levels of the mice treated with E-LERW (M) were similar to those treated with astilbin, the E-LERW (M) group had a significantly higher insulin concentration than the AC group, implying a protective effect of the flavonoids and polyphenols present in the extract on the islet β-cells. Flavonoids were able to increase the numbers of islets and β-cells, restore the pancreatic tissues impaired by alloxan, decrease β-cell apoptosis, and activate insulin receptors, resulting in increased insulin secretion [31]. The underlying mechanisms by which flavonoids and polyphenols preserve β-cells include the blocking of NF-kappa B signaling, activation of the PI3K/Akt pathway, and reduced release of nitric oxide (NO) and reactive oxygen species (ROS) [32]. Alloxan injection led to hyperglycemia accompanied by significant weight loss, while food intake, water intake, and the amount of excretion increased dramatically (Table 2). These phenomena were in accordance with what Leme et al. reported [33]. Administration of astilbin and of E-LERW (H) and (M) significantly alleviated the diabetes-induced weight loss and the increases in food intake, water intake, and excretion ($p \leq 0.05$). Compared to astilbin, E-LERW (M) reduced water intake and excretion more efficiently ($p \leq 0.05$).
Hyperglycemia also damaged the liver and kidney and made the two organs swell. E-LERW protected the liver and kidney by markedly diminishing the organ indexes. The E-LERW (M) group had lower liver and kidney organ indexes than the astilbin group, exhibiting a more potent protective effect on the organs. This function is associated with the strong antioxidant activity of E-LERW [34]. Hyperglycemia is related to a high yield of ROS, which may cause DNA oxidation, and high levels of genomic damage can lead to liver and renal failure [35,36]. Antioxidant phytochemicals such as phenolic compounds and flavonoids help to scavenge ROS and protect the organs from radical-related impairment [34]. The antioxidant components can also enhance the activity of antioxidant enzymes such as glutathione peroxidase and catalase [37] and lower the elevated levels of malondialdehyde (MDA) and NO in streptozotocin (STZ)-induced diabetic rats [38]. In addition, polyphenols and flavonoids were able to hinder the activity changes of hepatic enzymes, for example, alanine aminotransferase (ALT), aspartate aminotransferase (AST), and lactate dehydrogenase (LDH), and attenuated the hepatic toxicity caused by STZ [39].

## 5. Conclusions

Astilbin was the principal component of E-LERW. Compared to astilbin, E-LERW presented significantly higher activity in scavenging radicals, in FRAP, and in inhibiting the oxidation of lipid membranes. E-LERW also displayed a stronger affinity for α-glucosidase and a more powerful inhibitory strength on the enzyme, as evidenced by the Lineweaver–Burk curves. After alloxan injection, the plasma FBG, oral glucose tolerance AUC, TG, TC, and LDL levels of the mice increased to 4.18, 3.93, 2.04, 2.84, and 4.63 times the normal levels, respectively, while insulin secretion and HDL levels were reduced to $4.72\%$ and $38.97\%$ of those of normal mice.
Alloxan also impaired the organs, causing the liver and kidney indexes to increase by $33\%$ and $67\%$, respectively. Treatment with E-LERW (M) and (H) efficiently lowered the alloxan-induced increases in glucose and lipid levels and boosted the levels of insulin and HDL. In addition, E-LERW alleviated hyperglycemia-induced organ damage and decreased the liver and kidney indexes. Compared to the astilbin control, E-LERW did not show a more potent capacity for lowering the glucose level or improving oral glucose tolerance, but it was more effective in preventing weight loss and in reducing food intake, water intake, and excretion. Moreover, E-LERW was superior to astilbin in enhancing insulin secretion and protecting the organs. The study indicates that E-LERW may be a promising functional ingredient for alleviating the symptoms of diabetic patients.
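The Lineweaver–Burk analysis invoked above can be illustrated numerically: on a double-reciprocal plot, the intercept gives 1/Vmax and the slope gives Km/Vmax. A minimal sketch using synthetic, noise-free Michaelis–Menten data (the Vmax and Km values are chosen arbitrarily for illustration, not measured in the study), followed by a check of the Ki ratio quoted in the Discussion:

```python
import numpy as np

# Double-reciprocal (Lineweaver-Burk) regression: for v = Vmax*[S]/(Km + [S]),
# 1/v is linear in 1/[S] with slope Km/Vmax and intercept 1/Vmax. The data
# below are synthetic (Vmax = 2.0, Km = 0.5), not measurements from the study.
S = np.array([0.1, 0.2, 0.5, 1.0, 2.0])   # substrate concentrations
v = 2.0 * S / (0.5 + S)                   # noise-free Michaelis-Menten rates

slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Vmax = 1.0 / intercept                    # recovered maximal velocity
Km = slope * Vmax                         # recovered Michaelis constant
print(round(Vmax, 3), round(Km, 3))       # 2.0 0.5

# Ratio of the reported Ki values (astilbin 0.474 / E-LERW 0.145 mg/mL),
# matching the 3.27-fold difference stated in the Discussion.
print(round(0.474 / 0.145, 2))            # 3.27
```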
# Associations of Clusters of Cardiovascular Risk Factors with Insulin Resistance and β-Cell Functioning in a Working-Age Diabetic-Free Population in Kazakhstan

## Abstract

Cardiovascular risk factors aggregate in certain individuals. Patients with Type 2 diabetes mellitus (T2DM) have a higher cardiovascular risk. This study aimed to investigate insulin resistance (IR) and β-cell function using the homeostasis model assessment (HOMA) indexes in a general Kazakh population and to determine the effect that cardiovascular factors may have on those indexes. We conducted a cross-sectional study among employees of the Khoja Akhmet Yassawi International Kazakh-Turkish University (Turkistan, Kazakhstan) aged between 27 and 69 years. Sociodemographic variables, anthropometric measurements (body mass, height, waist circumference, hip circumference), and blood pressure were obtained. Fasting blood samples were collected to measure insulin, glucose, total cholesterol (TC), triglycerides (TG), and high- (HDL) and low-density lipoprotein (LDL) levels. Oral glucose tolerance tests were performed. Hierarchical and K-means cluster analyses were carried out. The final sample was composed of 427 participants. Spearman correlation analysis showed that cardiovascular parameters were statistically associated with HOMA-β ($p \leq 0.001$) but not with HOMA-IR. Participants were aggregated into three clusters, where the cluster with higher age and cardiovascular risk revealed deficient β-cell functioning but not IR ($p < 0.001$ and $p = 0.982$, respectively). Common and easy-to-obtain biochemical and anthropometric measurements capturing relevant cardiovascular risk factors were shown to be associated with a significant deficiency in insulin secretion.
Although further longitudinal studies of the incidence of T2DM are needed, this study highlights that cardiovascular profiling has a significant role not just for risk stratification of patients for cardiovascular prevention but also for targeted vigilant glucose monitoring. ## 1. Introduction Cardiovascular risk factors cluster and aggregate within individuals [1]. Clustering of risk factors has been associated with a higher risk of cardiovascular disease. These risk factors (high blood pressure, abnormal cholesterol [2], high triglycerides [3,4], obesity, lack of physical activity [5], and smoking [6]) have also been identified as being associated with a higher incidence of Type 2 diabetes mellitus (T2DM) [7]. Patients with T2DM frequently have a prior history of elevated cardiovascular risk [8]. Diabetes mellitus (DM) incidence is growing globally [9] as well as in Kazakhstan [10]. Diabetes is a complex and heterogeneous disease, more complex than the classification into Type 1 and Type 2 suggests [11]. Biological and clinical implications of putative subtypes of DM require further investigation [12]. A recent novel classification based on clusters attempts to make a refined classification of adult-onset diabetes subgroups and their association with a specific risk of complications, with the aim of providing a useful tool for individualized treatment [13]. Progression differences and complication incidences that are linked with differences in DM subtypes have also been explored to determine possible subtypes of patients that are at risk of developing diabetes [14,15]. Although insulin resistance (IR) and pancreatic β-cell dysfunction are the fundamental features in the development of Type 2 diabetes (T2DM), the pathogenesis of T2DM is still unclear. Both peripheral IR and insufficient insulin release from pancreatic islet β-cells induce hyperglycemia and, therefore, increase insulin demand. IR may be defined as a subnormal glucose response to endogenous and/or exogenous insulin. 
It most commonly occurs in association with obesity but may result from multiple other underlying causes, both cell-extrinsic and cell-intrinsic. Cell-extrinsic factors include circulating or paracrine molecules (such as hormones, cytokines, lipids, and metabolites) that are released from a cell or tissue other than the target cell/tissue, or absorbed by the intestine from the diet or microbiome action; cell-intrinsic factors are most likely due to genetic or epigenetic effects, and may or may not lie in the insulin signaling pathway itself [16]. The insulin receptor is a transmembrane protein of the receptor tyrosine kinase (RTK) family, which exists as covalently bound receptor dimers at the cell surface. This receptor plays crucial roles in all the important functions of cell growth and metabolism, as well as being related to DM, and thus has been considered a novel therapeutic target. An in-depth analysis of the insulin receptor would help develop an understanding of the regulation of cellular pathways and contribute to the development of novel drugs for T2DM [17]. However, the role and sequence of those inherently complex processes, IR and β-cell dysfunction, and their interrelation in triggering the pathogenesis of T2DM are also undefined [18]. Understanding how these multi-layered molecular networks modulate insulin action and metabolism in different tissues will open new avenues for therapy and prevention of T2DM. The homeostasis model assessment (HOMA) is derived from a mathematical assessment of the balance between hepatic glucose output and insulin secretion from fasting levels of glucose and insulin [19]. HOMA indexes provide valid estimates of insulin resistance (HOMA-IR) and of β-cell function (HOMA-β). The HOMA index calculation requires only a single measurement of fasting insulin and glucose and is thus considered a valid alternative. 
Well-conducted prospective studies have determined the predictive validity of both measures to identify patients that are at risk of developing T2DM [20,21,22,23,24]. Patients’ ethnic backgrounds have been associated with differences in the incidence and progression of T2DM [25]. The significant contribution of β-cell dysfunction to the incidence of T2DM in Asian populations, compared to Caucasians, is becoming recognized. These pathophysiological differences may have an important impact on therapeutic approaches [26]. Asians may have especially vulnerable β-cells, despite relatively good insulin sensitivity, and be unable to increase insulin secretion further if there is even a slight decrease in insulin sensitivity [27]. Kazakhstan is an ethnically diverse Central Asian country, and its genetic characteristics may hold an intermediate position between European and Eastern Asian populations [28]. In Turkistan, and quite differently from other regions of the country, the second most frequent ethnic group after ethnic Kazakhs is Uzbeks [29], with whom Kazakhs possibly share more genetic similarities than with ethnic Russians, who in the rest of the country are the second most frequent ethnic group. A previous study has shown that the South Kazakhstan region, to which Turkistan belongs, had the highest proportion of undiagnosed diabetes cases [11]. The objective of this study is to investigate IR and β-cell function in a general Kazakh population and determine the effect that cardiovascular factors may have on those indexes. ## 2. Materials and Methods The study was conducted at the Clinical Diagnostic Center of the Khoja Akhmet Yassawi International Kazakh-Turkish University (Turkistan, Kazakhstan) between 2019 and 2020. The study population consisted of employees of the Khoja Akhmet Yassawi International Kazakh-Turkish University. The inclusion criteria were age between 27 and 69 years and written informed consent to participate in the study. 
The exclusion criteria were previously diagnosed kidney disease or diabetes, or diabetes diagnosed from the blood tests analyzed in this work. Data on study participants were collected in a patient survey card that contained a summary of the study, a written voluntary informed consent form, passport and demographic data, questionnaires on lifestyle, as well as anthropometric and laboratory studies. The Fagerstrom test was used as a questionnaire to determine smoking status, and the Alcohol Use Disorders Identification Test (AUDIT) questionnaire was used to collect alcohol consumption information. An anthropometric study was conducted to determine height and weight, from which BMI was calculated. Height was measured with a stadiometer: study participants stood straight, without outerwear and shoes, with heels, buttocks, and shoulders in contact with the vertical plane of the stadiometer. The patients’ heads were kept in the “Frankfurt plane”, where the lower boundaries of the orbits are in the same horizontal plane as the external auditory meatus. While the participant held their breath on inspiration, the stadiometer plate was lowered to the head, after which the subject stepped away. After taking three measurements, the average height was determined with an accuracy of 0.1 cm. Body weight was measured on electronic scales. After turning on the scale display to check its performance, when 0.00 g appeared, the participants were asked to stand on the scale. At the same time, shoes, outerwear, and heavy items in pockets (mobile phones, wallets, etc.) were removed. Study participants stood in the center of the scales with their arms freely at their sides, looking straight ahead and remaining motionless. After three measurements, the mean body weight was recorded to the nearest 0.1 kg. 
Based on the results of measuring height and body weight, BMI was determined by the formula: weight (kg)/height (m)². Waist circumference (WC) was measured while standing, using a soft centimeter tape with an accuracy of 0.1 cm. WC was measured after normal expiration at the midpoint between the lower rib and the upper part of the iliac crest. Based on the measurement of WC, the presence of abdominal obesity (AO) was determined according to the International Diabetes Federation criterion (2005): a WC of more than 94 cm in men and 80 cm in women was taken as AO. Measurement of hip circumference (HC) was carried out with a centimeter tape, in the standing position, on the most protruding part of the gluteal region above the greater trochanters; the result was determined with an accuracy of 0.1 cm. Laboratory methods included the determination of glucose levels fasting and after a 2-h oral glucose tolerance test (OGTT), triglycerides (TG), total cholesterol (TC), high-density lipoprotein (HDL), and low-density lipoprotein (LDL). Blood sampling was carried out from the cubital vein after a 12-h fast. OGTT was performed with a 75 g glucose solution, with plasma glucose measured at 0 and 120 min. For prediabetes, fasting glucose was taken as 6.1–6.9 mmol/L and glucose after OGTT as 7.8–11.1 mmol/L (WHO). Biochemical studies were performed on a Cobas Integra-400 biochemical analyzer from Roche (Basel, Switzerland). The listed laboratory studies were carried out in the laboratory of the Clinical Diagnostic Center of Khoja Akhmet Yassawi International Kazakh-Turkish University. HOMA-IR and HOMA-β were calculated and divided into terciles and into two categories, namely IR and poor β-cell function [30]. HOMA models were calculated as HOMA-IR = [fasting insulin (µU/mL) × fasting glucose (mmol/L)]/22.5 and HOMA-β = [20 × fasting insulin (µU/mL)]/[fasting glucose (mmol/L) − 3.5]. IR was defined as HOMA-IR values ≥ 2.5 and poor β-cell function as HOMA-β ≤ 50. 
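As a minimal sketch (not the study's actual analysis code), the BMI and HOMA formulas and cut-offs above translate directly into code; the function names and example values are illustrative:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) / height (m) squared."""
    return weight_kg / height_m ** 2

def homa_ir(fasting_insulin_uU_ml: float, fasting_glucose_mmol_l: float) -> float:
    """HOMA-IR = [fasting insulin (uU/mL) x fasting glucose (mmol/L)] / 22.5."""
    return fasting_insulin_uU_ml * fasting_glucose_mmol_l / 22.5

def homa_beta(fasting_insulin_uU_ml: float, fasting_glucose_mmol_l: float) -> float:
    """HOMA-beta = [20 x fasting insulin (uU/mL)] / [fasting glucose (mmol/L) - 3.5]."""
    return 20 * fasting_insulin_uU_ml / (fasting_glucose_mmol_l - 3.5)

def classify(fasting_insulin_uU_ml: float, fasting_glucose_mmol_l: float) -> dict:
    """Apply the study's cut-offs: IR if HOMA-IR >= 2.5,
    poor beta-cell function if HOMA-beta <= 50."""
    ir = homa_ir(fasting_insulin_uU_ml, fasting_glucose_mmol_l)
    beta = homa_beta(fasting_insulin_uU_ml, fasting_glucose_mmol_l)
    return {"HOMA-IR": ir, "HOMA-beta": beta,
            "insulin_resistant": ir >= 2.5, "poor_beta_cell": beta <= 50}
```

For example, a fasting insulin of 12 µU/mL with a fasting glucose of 5.0 mmol/L gives HOMA-IR ≈ 2.67 (above the 2.5 cut-off) and HOMA-β = 160 (above the 50 cut-off).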
Correlation analysis was conducted to analyze the possible associations between the different cardiovascular risk factors and glucose metabolism variables. Cluster analyses, hierarchical and K-means, were conducted to identify individuals with aggregation of cardiovascular risk factors. To visualize the clustering of individuals, a hierarchical analysis with Ward’s method was first performed to create a dendrogram and visually determine a reasonable number of clusters. Second, K-means clustering was performed to separate cases using cardiovascular risk factors as descriptors. SPSS 29.0 statistical software was used for the analyses. This study was approved by the Commission on Clinical Ethics of the Faculty of Medicine of Khoja Akhmet Yassawi International Kazakh-Turkish University. Before entering the study, the participants were given personal explanations regarding the purpose and method of the study, as well as information regarding the processing of the results. Written consent was given by all participants. ## 3. Results Data were initially available from 632 participants, but data to calculate HOMA-IR and HOMA-β were available only for 488 participants. Cases with fasting blood glucose or OGTT compatible with a diagnosis of diabetes were eliminated. The total sample was composed of 427 subjects. The basal characteristics of cases are depicted in Table 1. The high obesity prevalence and elevated BMI are of note. As the variables did not show normal distributions, Spearman non-parametric correlation analysis was conducted. Table 2 shows the Spearman correlations between the different quantitative variables related either to cardiovascular risk factors or to glucose metabolism. All the cardiovascular parameters were inversely and statistically significantly associated with HOMA-β, and none of them with HOMA-IR. Figure 1 shows the dendrogram of the hierarchical clustering, demonstrating that the separation of subjects into three clusters is well depicted. 
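The clustering workflow described above (a hierarchical pass to choose the number of clusters, then K-means on the risk-factor vectors) was run in SPSS; purely as a rough illustration of the K-means step, here is a pure-Python sketch with hypothetical standardized data (not the study's variables):

```python
import random

def kmeans(points, k, n_iter=100, seed=0):
    """Plain K-means: repeatedly assign each point to its nearest
    centroid (squared Euclidean distance), then move each centroid
    to the mean of its members, until assignments stop changing."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    labels = None
    for _ in range(n_iter):
        new_labels = [
            min(range(k),
                key=lambda j, pt=pt: sum((a - b) ** 2
                                         for a, b in zip(pt, centroids[j])))
            for pt in points
        ]
        if new_labels == labels:  # converged
            break
        labels = new_labels
        for j in range(k):
            members = [pt for pt, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels, centroids

# Two well-separated hypothetical "risk profiles" (standardized values)
data = [(0.0, 0.1), (0.1, 0.0), (0.0, 0.0),
        (3.0, 3.1), (3.1, 3.0), (3.0, 3.0)]
labels, _ = kmeans(data, k=2, seed=1)
```

In practice each point would be one participant's vector of risk factors (blood pressure, lipids, BMI, etc.), standardized before clustering so that no variable dominates the distance.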
Table 3 reflects the values of cardiovascular risk factors of the three proposed clusters created using the K-means method. Table 4 reveals the glucose metabolism characteristics of participants aggregated into the three clusters. Significant associations were identified for β-cell functioning, but not for IR. Supplementary Tables S1–S4 show the distribution of cardiovascular risk factors by HOMA terciles as well as by IR and poor β-cell functioning. ## 4. Discussion The findings of this study, combining different analytical methods to identify the possible relationship between cardiovascular risk factors and homeostasis indexes that reflect susceptibility to T2DM, suggest the existence of a significant association between cardio-metabolic alterations and β-cell function, as measured by HOMA-β, but no such association with IR. Age also showed a strong independent effect on β-cell dysfunction. Another relevant finding of this study is the aggregation of cardiovascular risk factors in certain groups of this population, as well as the association of higher cardiovascular risk with age and with β-cell deficiency. It is also relevant to mention the elevated proportion of overweight and obese participants, as well as of abdominal obesity, in this population. HOMA-IR and HOMA-β are widely accepted surrogate measures of IR and β-cell dysfunction in clinical and epidemiological studies [31], but the interpretation and extrapolation of the current findings for application in clinical practice or public health decision-making should be cautious; more or less deteriorated glucose homeostasis indexes, reflecting either IR or poor β-cell functioning, should not be immediately associated with a higher risk of incidence of T2DM. 
The most common understanding of the T2DM pathogenic process is that IR is the primary glucose homeostasis abnormality, with β-cell dysfunction being a later manifestation when β-cells can no longer sustain sufficient insulin secretion and become ‘exhausted’. However, “primary” β-cell dysfunction as an independent abnormality in the early phases of the development of dysglycemia has also been suggested [32,33]. Different mechanisms (glucotoxicity, lipotoxicity, oxidative stress, endoplasmic reticulum stress, inflammatory stress, amyloid formation, or decreased incretins) have been suggested for β-cell death [34,35]. The coexistence of adverse cardiovascular risk profiles, including overweight and obesity, high blood pressure levels, and lipid alterations in certain individuals, may create a “hostile” metabolic environment that, acting in concert with age, may reduce functional β-cell mass and increase the risk for T2DM [36,37]. Cluster 1, identified in this study, included 37% of the analyzed sample and revealed advanced age and the worst cardiovascular risk in terms of blood pressure and lipid profiles, a high prevalence of obesity, and a significantly poorer β-cell function. In contrast, the cluster with lower age and the most favorable cardiovascular risk showed the best pancreatic β-cell function. These data did not show an association between cardiovascular risk factors and IR. Obesity has been classically considered a hallmark of IR [38]. We did not find this association in this study. In obesity, adipose tissue releases increased amounts of non-esterified fatty acids, glycerol, hormones, pro-inflammatory cytokines, and other factors that are involved in the development of IR [39]. However, it is only when IR is accompanied by dysfunction and failure of pancreatic β-cells to control blood glucose levels that this results in T2DM. 
Our results indicate an association between obesity and reduced β-cell function [40]. Obesity may be linked to pancreatic fat infiltration leading to impaired β-cell function and the development of T2DM [41,42]. Excess cholesterol may exert direct pancreatic β-cell lipotoxicity, contributing as an underlying factor to the progression of T2DM [43]. Cholesterol is important for β-cell function and survival, but it can cause β-cell loss if allowed to accumulate in the cells in an unregulated manner [44]. Cholesterol excess impacts several steps of the metabolic machinery involved in glucose-stimulated insulin release localized at the endoplasmic reticulum, mitochondria, and the cell membrane [45,46]. This study adds to the growing body of literature suggesting that obesity and lipid alterations contribute to β-cell dysfunction [47]. Aging is one of the most important factors implicated in the major changes associated with deteriorated glucose metabolism through β-cell function [48,49], and this effect appears to be independent of IR, BMI, and waist circumference [50]. The cause of this age-dependent functional decline is not known [51]. It is also not known whether this effect is mediated by a reduction in incretin secretion or whether it reflects an aging-related β-cell resistance to the incretin effect, thus requiring an increased release of the incretin hormones, glucagon-like peptide-1 and gastric inhibitory polypeptide, to stimulate adequate insulin secretion in response to the glucose load [52,53]. A better understanding of all the factors that alter the proper regulation of glucose metabolism at advanced ages will facilitate the design of therapies that allow for better management of glycemia [54]. The study has some limitations. First, this is a selected working-age population from one company. The generalizability of these data may be limited. 
The cross-sectional nature of the study design prevents establishing causality in the direction of the associations identified. Cut-off points for HOMA-IR and HOMA-β are not standardized; other cut-off points may have rendered different results. The lack of standardized universal insulin assays limits their use for routine assessment of insulin resistance in the clinical setting and may have affected our results. This analysis separated the cases into three clusters, but a different number of clusters might have rendered different results. The same limitation applies to our cut-off points for IR and poor β-cell function. Also, the ethnic or genetic factors that may influence the glucose homeostasis indexes may be valid only for the specific populations in which they were obtained. No data on two relevant variables, drug use and physical activity, were available for these analyses. Lastly, this study does not aim to provide mechanistic explanations for the possible associations that may have been identified by analyzing these data. ## 5. Conclusions T2DM is a complex and multifactorial global health problem that affects millions of people worldwide, has a significant impact on their quality of life, and results in grave consequences for healthcare systems. In T2DM, deficiency of β-cell function, primary or secondary to peripherally developed IR, is a paramount factor leading to dysregulated blood glucose and long-lasting hyperglycemia. The results from this work indicate that common and easy-to-obtain biochemical and anthropometric measurements capturing relevant cardiovascular risk factors are associated with significantly deficient β-cell insulin secretion. Although further longitudinal studies of the incidence of T2DM are needed, this study highlights that cardiovascular profiling has a significant role, not just for risk stratification of patients for cardiovascular prevention, but also for targeted vigilant glucose monitoring.
# Analysis of PTPN22 −1123 G>C, +788 G>A and +1858 C>T Polymorphisms in Patients with Primary Sjögren’s Syndrome ## Abstract Background: Primary Sjögren’s syndrome (pSS) is an autoimmune exocrinopathy characterized by lymphocytic infiltration, glandular dysfunction and systemic manifestations. The Lyp protein is a negative regulator of the T cell receptor encoded by the protein tyrosine phosphatase nonreceptor-type 22 (PTPN22) gene. Multiple single-nucleotide polymorphisms (SNPs) in the PTPN22 gene have been associated with susceptibility to autoimmune diseases. This study aimed to investigate the association of PTPN22 SNPs rs2488457 (−1123 G>C), rs33996649 (+788 G>A), rs2476601 (+1858 C>T) with pSS susceptibility in Mexican mestizo subjects. Methods: One hundred fifty pSS patients and 180 healthy controls (HCs) were included. Genotypes of PTPN22 SNPs were identified by PCR-RFLP. PTPN22 expression was evaluated through RT–PCR analysis. Serum anti-SSA/Ro and anti-SSB/La levels were measured using an ELISA kit. Results: Allele and genotype frequencies for all SNPs studied were similar in both groups (p > 0.05). pSS patients showed 17-fold higher expression of PTPN22 than HCs, and mRNA levels correlated with SSDAI score (r² = 0.499, p = 0.008) and with levels of anti-SSA/Ro and anti-SSB/La autoantibodies (r² = 0.200, p = 0.03 and r² = 0.175, p = 0.04, respectively). Anti-SSA/Ro-positive pSS patients expressed higher PTPN22 mRNA levels (p = 0.008), with high focus scores by histopathology (p = 0.02). Moreover, PTPN22 expression had high diagnostic accuracy in pSS patients, with an AUC = 0.985. Conclusions: Our findings demonstrate that the PTPN22 SNPs rs2488457 (−1123 G>C), rs33996649 (+788 G>A) and rs2476601 (+1858 C>T) are not associated with disease susceptibility in the western Mexican population. Additionally, PTPN22 expression may be helpful as a diagnostic biomarker in pSS. ## 1. 
Introduction Primary Sjögren’s syndrome (pSS) is an autoimmune disease characterized by lymphocyte infiltration to lachrymal and salivary glands and impaired secretory activity, leading to the most important manifestations of the disease, keratoconjunctivitis sicca and xerostomia [1]. The etiology of this disease is incompletely understood; however, a key element in the pathogenesis is T and B lymphocyte hyperactivity, leading to autoantibody production mainly against ribonucleoproteins (SSA/Ro and SSB/La) and consequent presence of hypergammaglobulinemia [2,3]. It has been suggested that pSS is a complex and multifactorial disease, with genetic, environmental and hormonal factors involved in the disease pathogenesis. The protein tyrosine phosphatase nonreceptor type 22 (PTPN22) gene encodes the cytoplasmic protein lymphoid tyrosine phosphatase protein (Lyp), a potent downregulator of T cells, by inhibiting signaling through dephosphorylation of several substrates [4]. PTPN22 is involved in calibrating the T cell activation threshold and terminating TCR signaling [5]. Diverse case-control studies have examined the potential contribution of PTPN22 SNPs and their haplotypes to susceptibility to different autoimmune diseases (AIDs); however, results are inconsistent, in part because of ethnic and racial differences [6,7,8,9]. For example, rs2488457 (−1123 C) has been associated with type 1 diabetes mellitus in the Korean population [10]. In the Chinese population, rs2488457 is associated with rheumatoid arthritis (RA) [11], latent autoimmune diabetes in adults [12] and ulcerative colitis (UC) [13], whereas it is reported to be associated with less risk of systemic lupus erythematosus (SLE) in the Mexican population [14]. In addition, Muñoz-Valle et al. found an association between rs2488457 and lower levels of anti-citrullinated antibodies in RA patients [15]. 
The SNP rs33996649 (+788 G>A) is located in the region encoding the catalytic domain of Lyp and represents a change from arginine (R) to glutamine (Q) (R263Q). This amino acid alteration leads to a loss of function through reduced phosphatase activity [7]. The rs33996649 GA genotype has also been related to protection against autoimmune diseases in European and American populations [16,17]. Another functional SNP is rs2476601 (+1858 C>T), involving substitution of arginine for tryptophan at codon 620 (R620W) in the first proline-rich domain (P1) of Lyp. This variation alters the Lyp/C-Src tyrosine kinase interaction domain and results in a gain-of-function Lyp (increased phosphatase activity) that inhibits TCR signaling [16]. This polymorphism has been related to SLE in North America [18], RA in Mexico [19], and pSS in Colombia [20]. In the present case-control study, we investigated whether there is an association between PTPN22 polymorphisms, their haplotypes and PTPN22 mRNA expression and susceptibility to pSS in a Mexican population. ## 2.1. Patients and Healthy Controls One hundred eighty healthy controls and one hundred fifty pSS patients were included in the present study. The pSS patients were classified according to the 2016 American College of Rheumatology (ACR) and European League Against Rheumatism (EULAR) classification criteria for pSS [21]. The sample size was calculated according to the formula $n = \left[Z_\alpha\sqrt{2\bar{p}\bar{q}} + Z_\beta\sqrt{p_1 q_1 + p_0 q_0}\right]^2 / (p_1 - p_0)^2$, and the minimum number of alleles was n = 283, based on the frequencies for the PTPN22 +1858 C>T gene polymorphism previously published in Latin-American pSS patients [20]. This study was conducted in the Hospital General de Occidente, México, and the Instituto de Investigación en Ciencias Biomédicas, Universidad de Guadalajara, México. All participants were born in western Mexico with a minimum of third-generation ancestry and a Spanish-derived last name [22]. We excluded HCs with a family history of autoimmune diseases. 
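The sample-size expression above is the standard formula for comparing two proportions. As a sketch with hypothetical inputs (z_alpha for a two-sided 5% test, z_beta for 80% power, and illustrative proportions p0 and p1 rather than the study's actual allele frequencies):

```python
import math

def n_per_group(p0: float, p1: float,
                z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """n = [Z_a*sqrt(2*pbar*qbar) + Z_b*sqrt(p1*q1 + p0*q0)]^2 / (p1 - p0)^2,
    where pbar = (p0 + p1)/2 and each q is the complement of its p."""
    p_bar = (p0 + p1) / 2
    q_bar = 1 - p_bar
    numerator = (z_alpha * math.sqrt(2 * p_bar * q_bar)
                 + z_beta * math.sqrt(p1 * (1 - p1) + p0 * (1 - p0))) ** 2
    return math.ceil(numerator / (p1 - p0) ** 2)
```

For instance, detecting a drop from p0 = 0.5 to p1 = 0.3 at these settings requires 93 observations per group; larger differences between the proportions need fewer observations.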
At the time of inclusion, the pSS patients were evaluated with the Sjögren’s Syndrome Disease Activity Index (SSDAI) and the Sjögren’s Syndrome Disease Damage Index (SSDDI) [23]. All study subjects signed informed consent. The institutional ethics and research committees approved the study under approval number 449/16. ## 2.2. Genotyping of rs2488457 −1123 G>C, rs33996649 +788 G>A and rs2476601 +1858 C>T Polymorphisms Peripheral blood was collected from pSS patients and HCs. Genomic DNA (gDNA) extraction was performed using Miller’s technique [24]. We used polymerase chain reaction (PCR) to identify rs2488457 (−1123 G>C), rs33996649 (+788 G>A), and rs2476601 (+1858 C>T) genotypes. The primers, enzymes, and digestion products used to evaluate the SNP genotypes in our study are provided in Table 1. The forward primer for rs2488457 (−1123 G>C) contains a recognition site for the endonuclease SacI (GAGCTxC) with an A>G substitution (underlined) [14,25]. PCR was carried out in a final volume of 10 µL including 1× of the supplied 10× enzyme buffer, 4 mM MgCl2, 2.5 mM of each dNTP, 3 mM of each primer, 0.04 units of Taq DNA polymerase (Invitrogen Life Technologies, Carlsbad, CA, USA) and 100 ng/μL of gDNA. The amplification protocol was as follows: initial denaturation at 95 °C for 3 min, followed by 29 cycles of 94 °C for 30 s, 67 °C for 30 s and 72 °C for 30 s, with a final extension at 72 °C for 3 min (Thermal cycler TechNet TC-5000, Cole-Palmer, Beacon Rode, ST, UK). The PCR products were digested with 3 U of SacI (New England Biolabs, Ipswich, MA, USA) at 37 °C for 3 h. The restriction fragments were assessed by 6% polyacrylamide gel electrophoresis and stained with 2% AgNO3. The products after digestion with SacI are shown in Table 1. 
For rs33996649 (+788 G>A), PCR was carried out in a final volume of 10 µL containing 1× of the supplied 10× enzyme buffer, 2.5 mM of each dNTP, 3 mM of each primer, 0.2 units of Taq DNA polymerase (DONGCHEN Biotech, Guangdong, China) and 100 ng/μL of gDNA. The amplification protocol was as follows: initial denaturation at 95 °C for 5 min, followed by 35 cycles of 95 °C for 40 s, 53 °C for 40 s, and 72 °C for 40 s, with a final extension of 72 °C for 5 min (Thermal cycler TechNet TC-5000, Cole-Palmer, Beacon Rode, ST, UK). The PCR product was digested with 3 U of MspI (New England Biolabs, Ipswich, MA, USA) at 37 °C for 3 h, and the restriction fragments were observed on a 6% acrylamide gel and stained with 2% AgNO3. Table 1 shows the digestion products with MspI. The PCR mixture for rs2476601 (+1858 C>T) was the same as for rs2488457 (−1123 G>C). The thermal cycling conditions were as follows: initial denaturation at 95 °C for 3 min, 33 cycles of denaturation at 94 °C for 30 s, annealing at 56 °C for 30 s and extension at 72 °C. The products were digested with 3 U of XcmI (New England Biolabs, Ipswich, MA, USA) at 37 °C for 3 h. The restriction fragments were separated by 6% polyacrylamide gel electrophoresis and stained with 2% AgNO3. The products after digestion with XcmI are shown in Table 1. ## 2.3. RNA Extraction and Reverse Transcription Total cellular RNA was extracted from peripheral blood mononuclear cells (PBMCs) using TRIzol reagent (Invitrogen Life Technologies, Carlsbad, CA, USA) according to the manufacturer’s protocol. Repeated phenol–chloroform extraction was performed for the RNA samples, which were isolated using the Chomczynski and Sacchi method [26]. The 260/280 ratio was used to provide an estimate of purity. Low-quality and degraded RNA samples were excluded. 
According to the reverse transcriptase protocol (Promega, Madison, WI, USA), oligo-dT primers and reverse transcriptase (MMLV) were used to synthesize complementary DNA (cDNA) from 1 μg of total RNA. PTPN22 mRNA expression was determined in twenty-eight pSS patients and twenty-eight HCs of different genotypes. ## 2.4. Quantitative PCR (qPCR) Quantitative real-time polymerase chain reaction (qPCR) was carried out to quantify the expression of the gene of interest. The RT–qPCR protocol followed the guidelines of the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) [27] using a Nano Light Cycler 2.0 (Roche Applied Science, Branford, CT, USA). Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) was used as a reference gene to determine relative quantification after it was shown to be stably expressed in the samples [28]. The primers and hydrolysis probes were designed with the Roche Universal Probe Library (PTPN22: cat. no. 04689011001, GAPDH: probe cat. no. 05190541001). All samples were run in duplicate. After validation of PCR efficiency for both genes, the data obtained were analyzed. A comparative threshold cycle (Cq) method with a cutoff of 40 cycles was used to determine the PTPN22 mRNA copy number relative to GAPDH, and data are shown based on the 2^−ΔΔCq method [29] and the 2^−ΔCq method [30]. ## 2.5. Anti-SSA/Ro and Anti-SSB/La Serum Level Determination Anti-SSA/Ro and anti-SSB/La serum levels were determined from serum samples stored at −80 °C until measurement, using a commercially available ELISA kit (cat. no. ORG 506 and ORG 508, respectively, ORGENTEC Diagnostika GmbH, Carl-Zeiss-Straße 49, 55129 Mainz, Germany) with a sensitivity of 1 U/mL and a 0–200 U/mL standard range. A Multiskan GO spectrophotometer (Thermo Fisher Scientific Oy, Ratastie, PO, Finland) was employed to obtain the optical density of all samples. The concentration was calculated based on a standard curve, and the results are reported as U/mL. 
According to the ORGENTEC ELISA kit protocol, samples with values of >25 U/mL were considered positive. ## 2.6. Statistical Analysis Concerning the evaluation of PTPN22 gene polymorphisms, Hardy–Weinberg equilibrium (HWE) was tested using the χ2 test or Fisher’s exact test. Genotypic and allelic frequencies were compared by a 2 × 2 contingency table, and a χ2 test was performed. The Lewontin normalized coefficient D′ was used for assessing linkage disequilibrium (LD) between pairs of markers. SHEsis software was applied for haplotype analysis [31], and haplotypes with a low frequency (<1%) were not included. Student’s t test, the Mann–Whitney U test, one-way ANOVA, the Kruskal–Wallis test and Dunn’s post hoc test were applied according to the data distribution. SPSS 25 (IBM Corporation, Armonk, NY, USA) and GraphPad Prism 8.0 (GraphPad Software, Inc., La Jolla, CA, USA) software were used for all statistical analyses. Differences were considered significant at a p value < 0.05 and were corrected with Bonferroni’s method where applicable. Statistical analysis to determine the fold change in PTPN22 mRNA expression between pSS patients and HCs was performed using the 2^−ΔΔCq method, and statistically significant differences were determined through the 2^−ΔCq method. Values were obtained using the following formulas: ΔCq = (Cq PTPN22 average − Cq GAPDH average) and ΔΔCq = (ΔCq pSS − ΔCq HC). Receiver operating characteristic (ROC) curves and the area under the ROC curve (AUC) were used to assess the performance of the PTPN22 mRNA expression level as a diagnostic tool for pSS. ## 3.1. Demographic and Clinical Characteristics One hundred fifty pSS patients were included in this study. The average age was 55 (±10) years, and all patients were female. The disease duration was 2.3 years [interquartile range (IQR) 1–5.5], and the average lymphocytic infiltration obtained from biopsies of the minor salivary gland was 2.3 (±1.7) foci per 4 mm². 
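The relative-expression arithmetic used in the statistical analysis (ΔCq as the target Cq minus the reference Cq, ΔΔCq as the patient ΔCq minus the control ΔCq, and fold change as 2^−ΔΔCq) can be sketched as follows; the Cq values here are hypothetical, not measured data:

```python
def delta_cq(cq_target: float, cq_reference: float) -> float:
    """dCq = Cq(gene of interest) - Cq(reference gene, e.g. GAPDH)."""
    return cq_target - cq_reference

def fold_change(dcq_case: float, dcq_control: float) -> float:
    """Relative expression by the 2^-ddCq (Livak) method."""
    ddcq = dcq_case - dcq_control
    return 2 ** -ddcq

# Hypothetical mean Cq values: a lower dCq in patients means the target
# amplifies earlier relative to GAPDH, i.e. higher expression.
dcq_patient = delta_cq(24.0, 20.0)   # 4.0
dcq_control = delta_cq(26.0, 20.0)   # 6.0
fc = fold_change(dcq_patient, dcq_control)
```

With these illustrative numbers, ΔΔCq = 4.0 − 6.0 = −2, so the fold change is 2² = 4, i.e. four-fold higher expression in the patient group.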
Anti-SSA/Ro autoantibodies were positive in $23.3\%$ of the pSS patients and anti-SSB/La autoantibodies in $13\%$. SSDAI and SSDDI means were 3 (±1) and 1 (±1), respectively. The main clinical manifestations and treatments are shown in Table 2. ## 3.2. Genotype Distribution of PTPN22 rs2488457 (−1123 G>C), rs33996649 (+788 G>A), and rs2476601 (+1858 C>T) Polymorphisms The genotypic and allelic frequencies of the rs2488457 (−1123 G>C), rs33996649 (+788 G>A) and rs2476601 (+1858 C>T) PTPN22 polymorphisms in pSS patients and HCs and their comparison are shown in Table 3. All PTPN22 gene polymorphisms were in Hardy–Weinberg equilibrium. Overall, genotypic and allelic frequencies for rs2488457 (−1123 G>C) in the pSS patients were similar to those in HCs (GG $52\%$, GC $40.7\%$ and CC $7.3\%$ vs. GG $52.2\%$, GC $40\%$ and CC $7.8\%$, respectively), with no significant differences ($p > 0.05$). Similarly, for rs33996649 (+788 G>A), there were no statistically significant differences in allele and genotype frequencies between the groups (GG $96.6\%$, GA $2.7\%$ and AA $0.7\%$ vs. GG $98.3\%$ and GA $1.7\%$). Regarding rs2476601 (+1858 C>T), allele and genotype frequencies were similar in pSS patients and HCs (CC $98\%$, CT $1.3\%$ and TT $0.7\%$ vs. CC $98.9\%$, CT $1.1\%$ and TT $0\%$), with no significant differences between genotypic and allelic frequencies in pSS patients compared to HCs and a very low frequency of the T allele. ## 3.3. PTPN22 rs2488457 (−1123 G>C), rs33996649 (+788 G>A), and rs2476601 (+1858 C>T) Haplotypes rs2488457 (−1123 G>C) and rs2476601 (+1858 C>T) were found to be in medium linkage disequilibrium (LD) (D’ = 0.70). On the other hand, the locus rs33996649 (+788 G>A) was not found to be in linkage disequilibrium with rs2488457 (−1123 G>C) and rs2476601 (+1858 C>T). The most frequent haplotype in pSS patients and HCs was GGC ($70.7\%$ vs. $71\%$, respectively), which included the three wildtype alleles of the SNPs. 
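The 2 × 2 genotypic/allelic comparison described in the Methods can be sketched with a hand-rolled Pearson χ2; the allele counts below are hypothetical, and the 3.841 threshold is the χ2 critical value for df = 1 at α = 0.05:

```python
def chi2_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]] comparing allele counts between two groups."""
    (a, b), (c, d) = table
    n = a + b + c + d
    # chi2 = n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical major/minor allele counts for patients and controls
patients = (217, 83)   # 300 alleles from 150 patients
controls = (260, 100)  # 360 alleles from 180 controls

stat = chi2_2x2([patients, controls])
significant = stat > 3.841  # critical value for df = 1, alpha = 0.05
```

With these made-up counts the statistic is far below the critical value, i.e. no significant difference between groups, mirroring the pattern reported in the results.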
CGC frequencies were similar in pSS ($26.3\%$) and HCs ($27.73\%$) ($p > 0.05$) (Table 3). ## 3.4. PTPN22 mRNA Expression and Clinical Association PTPN22 expression was determined in 28 pSS patients and 28 HCs. The pSS patients showed 17.9-fold higher PTPN22 gene expression than the HCs (Figure 1a) ($p < 0.001$, Figure 1b). When comparing PTPN22 gene expression according to rs2488457 (−1123 G>C) genotype in the pSS group, carriers of the GC genotype showed slightly higher expression (0.51-fold more) than GG carriers; however, no significant difference was found ($p > 0.05$; see Figure 1c). In addition, patients with active pSS expressed 1.94-fold higher levels of PTPN22 than patients with inactive pSS (Figure 1d). Quantitative expression of PTPN22 was higher in pSS patients with active disease ($p < 0.05$, Figure 1e) and in those positive for anti-SSA/Ro antibodies ($p = 0.006$, Figure 1f), and a positive correlation with SSDAI was also observed (r = 0.499, $p = 0.008$, Figure 1g). According to damage status and SSDDI score, PTPN22 expression was similar in pSS patients (Figure 1h) but higher than that in HCs (Figure 1i, $p < 0.001$), with no statistical correlation (r = −0.096, $p > 0.05$, Figure 1g). Regarding clinical manifestations and autoantibody profiles, SSDAI score had a positive correlation with anti-SSA/Ro (r = 0.200, $p = 0.03$, Figure 2a) and anti-SSB/La (r = 0.175, $p = 0.046$, Figure 2b) serum levels. Additionally, a significantly higher focus score for MSG biopsies and ANA titers was found in anti-SSA/Ro-positive patients ($p < 0.05$, Figure 2c and Figure 2d). Patients with high SSDAI hematological domain scores showed 2.58-fold higher expression than patients with quiescent disease (Figure 2e). Furthermore, PTPN22 expression displayed an AUC = 0.98 for accurate diagnosis of pSS (Figure 2f). ## 4. 
Discussion pSS is a systemic autoimmune disorder characterized by focal lymphocytic infiltration into the exocrine glands, causing dry eyes and dry mouth [1]. It has been suggested that pSS etiology is complex; however, TCR dysregulation plays an important role in the pathogenesis of autoimmune diseases [32]. Lyp is a tyrosine phosphatase that regulates T cells through inhibitory signaling by dephosphorylating several substrates, including the Src family kinases Lck and Fyn, as well as ZAP-70, during TCR lymphocyte activation [4,33]. The Lyp protein is encoded by the PTPN22 gene on chromosome 1. rs2488457 (−1123 G>C), rs33996649 (+788 G>A) and rs2476601 (+1858 C>T) are functional polymorphisms of the PTPN22 gene associated with multiple inflammatory conditions, including autoimmune disorders such as pSS [7,20,33]. Our study analyzed the SNPs rs2488457 (−1123 G>C), rs33996649 (+788 G>A) and rs2476601 (+1858 C>T) in the PTPN22 gene and susceptibility to pSS development in a Mexican mestizo population. The minor C allele of rs2488457 was detected in $27.78\%$ of HCs, which is a lower proportion than the frequencies reported in the Asian population ($33\%$ to $41\%$). Nevertheless, we found a similar frequency of the rs2488457 GC genotype ($40\%$ vs. 37–$46.1\%$) and a lower percentage of the rs2488457 CC genotype ($7.8\%$ vs. 13.7–$18.1\%$) [10,11,12,13]. The distributions of the major rs33996649 G allele and the rs33996649 GG genotype are similar in the Mexican population [34], and the absence of the rs33996649 AA genotype is consistent with reports for European and Argentine populations [16,17,35,36]. Additionally, the minor allele frequency of rs2476601 T in the western Mexican population ($0.6\%$) is similar to that reported in Amerindian and African populations (<$1\%$) [7] but lower than that in Northern European populations ($15\%$) [9]. The rs2476601 (+1858 CT) genotype frequency in our study was $2.2\%$, lower than in European and American populations [18]. 
However, the rs2476601 TT genotype was absent in the Occidental Mexican population, which is consistent with previous reports for the same population [14,15,19]. Previous studies have analyzed the distribution of all these SNPs in healthy unrelated Mexican Mestizo subjects, showing genotypic and allelic frequencies similar to those reported in our study [14,15,19,35]. In general, ancestry studies in Mexican mestizos from the west region (State of Jalisco), based on maternal ancestry (mtDNA haplogroups), underscore the predominance of the Native American contribution ($87\%$), followed by European ($9\%$), African ($3\%$) and Eurasian ($1\%$) contributions [37]. However, when Mexican admixture is analyzed based on the paternal contribution (Y-STRs), the Native American contribution decreases ($28\%$) and the African contribution remains small ($5\%$), while the European contribution rises ($67\%$) [38]. rs2488457 (−1123 G>C), rs33996649 (+788 G>A) and rs2476601 (+1858 C>T) were not found to be associated with an increased risk of developing pSS in the Mexican mestizo population from western Mexico. In contrast, rs2488457 (−1123 G>C) has been associated with UC, RA, and autoimmune diabetes mellitus in Asians [11,13]. The genotypic and allelic frequencies observed in west Mexican pSS patients and HCs for rs2488457 (−1123 G>C) were similar to those reported for the European population and to the total allelic frequencies reported in Phase 3 of the 1000 Genomes Project [39]. Additionally, the rs2476601 T allele is associated with a risk for developing pSS in the Colombian population [20], and with RA in western [19] and central [40] Mexican patients. rs33996649 (+788 G>A) has been reported to have a protective role against SLE and RA in European populations [16,36]. This is the first study to investigate three SNPs, rs2488457 (−1123 G>C), rs33996649 (+788 G>A) and rs2476601 (+1858 C>T), in the PTPN22 gene. 
The haplotype analysis showed a medium LD between rs2488457 (−1123 G>C) and rs2476601 (+1858 C>T), but no LD was found with rs33996649 (+788 G>A), and the haplotype frequencies were similar in pSS patients and HCs. Different studies evaluating PTPN22 haplotypes with polymorphic alleles have described an increased risk of developing RA in Norwegian and western Mexican populations [19,41]. In addition, PTPN22 gene polymorphisms have been associated with higher gene expression in RA and UC [13,35]. In this study, the pSS patients showed 17-fold higher mRNA expression than HCs. In another study by our group, patients with SLE showed PTPN22 mRNA expression levels similar to controls [14]. In general, polymorphisms might explain higher gene expression. Lyp1 is mainly present in the cytoplasm of active T lymphocytes, whereas Lyp2 is found in the nucleus, perinuclear membrane, and cytoplasm of inactive peripheral T lymphocytes [42]. The third isoform reported, named PTPN22.6, lacks the catalytic site and is reported to be predominant in RA patients carrying the rs2476601 (+1858 C>T) R620W functional variant. PTPN22.6 leads to higher nuclear factor of activated T cells (NFAT) expression and elevated IL-2 levels, with uncontrolled autoreactive T cell clonal expansion, by exerting a dominant negative effect over Lyp1. Additionally, expression of PTPN22.6 correlates with RA activity [43]. Similar to the findings of Chang et al. in RA patients, we found an association between PTPN22 mRNA expression and clinimetric indices and autoantibody profiles, which is the most important finding of our study. T cell receptor dysregulation is a key factor in glandular tissue damage: it is associated with a higher concentration of inflammatory cytokines [2] and promotes B cell activation, class switching, the T cell-dependent autoantibody response and germinal center (GC) expansion [44]. 
GC expansion has also been associated with higher production of pSS autoantibodies, such as anti-SSA/Ro, anti-SSB/La, antinuclear antibodies, and rheumatoid factor. On the other hand, murine model studies have demonstrated that PTPN22 loss of function in myeloid cells results in an augmented inflammatory effector phase of autoimmune disease and GC generation by influencing the number and activity of Th follicular cells [44,45]. The presence of anti-SSA/Ro and anti-SSB/La correlates with severe lymphocytic infiltration of the salivary glands, a higher prevalence of extraglandular manifestations and recurrent swelling of the parotid glands [46]. In our patients with pSS, we observed a clinical association between pSS activity and damage indices, autoantibodies, and MSG infiltration. Anti-SSA/Ro and histopathological MSG focus scores are the only two diagnostic tools used to classify pSS patients. Therefore, we evaluated PTPN22 gene expression as a biomarker. The area under the curve of PTPN22 expression was 0.985 (the suggested cutoff was >60 relative expression units, with $100\%$ sensitivity, $91.67\%$ specificity, and a likelihood ratio of 12; data not shown), demonstrating high diagnostic performance for pSS, which is similar to the accuracy of anti-SSA/Ro autoantibody diagnosis [47]. In populations such as ours, with a low frequency of anti-SSA/Ro ($25\%$) antibody positivity, PTPN22 expression may be helpful as a molecular biomarker for pSS diagnosis. This study has important limitations, such as the small sample size, selective recruitment from the western Mexican population, the lack of patients with the homozygous rs2488457 (−1123 CC) genotype for the analysis of PTPN22 mRNA expression, the lack of a disease control group for comparative analysis of PTPN22 mRNA, and heterogeneity in the treatment of pSS, which may be reflected in differences in PTPN22 mRNA expression. Moreover, the PTPN22.6 isoform was not evaluated. ## 5. 
Conclusions In summary, the rs2488457 (−1123 G>C), rs33996649 (+788 G>A) and rs2476601 (+1858 C>T) polymorphisms of the PTPN22 gene are not associated with susceptibility to pSS in the Mexican population. We propose that PTPN22 expression could be used as a molecular biomarker in pSS, as PTPN22 expression is associated with autoantibody presence, disease activity index, and extraglandular manifestations. However, further studies are required to analyze interacting epigenetic factors, as well as the relationship between Lyp and the local environment of the germinal centers in exocrine glands.
# Digital Health Literacy and Person-Centred Care: Co-Creation of a Massive Open Online Course for Women with Breast Cancer ## Abstract The diagnosis of breast cancer (BC) can make the affected person vulnerable to suffering the possible consequences of the use of low-quality health information. Massive open online courses (MOOCs) may be a useful and efficient resource to improve digital health literacy and person-centred care in this population. The aim of this study is to co-create a MOOC for women with BC, using a modified design approach based on patients’ experience. Co-creation was divided into three sequential phases: exploratory, development and evaluation. Seventeen women at any stage of BC and two healthcare professionals participated. In the exploratory phase, a patient journey map was developed and empowerment needs related to emotional management strategies and self-care guidelines were identified, as well as information needs related to understanding medical terminology. In the development phase, participants designed the structure and contents of the MOOC through a Moodle platform. A MOOC with five units was developed. In the evaluation phase, participants strongly agreed that their participation was useful for the MOOC’s development and that participating in the co-creation process made the content more relevant to them (experience in the co-creation); most of the participants positively evaluated the content or interface of the MOOC (acceptability pilot). Co-designing educational interventions with women with BC is a viable strategy for generating higher-quality, useful resources for this population. ## 1. Introduction Breast cancer (BC) represents one of the most frequently diagnosed cancers in women worldwide [1]. According to the most recent data from the European Cancer Information System (ECIS), there were approximately 355,460 new cases of BC diagnosed across Europe in 2020, with 34,088 of those cases occurring in Spain [2]. 
However, thanks to early diagnosis and therapeutic advances, BC survival has increased in recent years [3], with a survival rate of around $85\%$ [4]. Increasing prevention and treatment for BC have lowered mortality, but the diagnosis and treatment continue to have a significant impact in many areas of patients’ lives (physical, emotional, cognitive and social) [5]. The diagnosis of BC, which in most cases necessitates an effort to adjust and adapt to the new situation [5,6], is typically perceived as a traumatic event with a significant impact on the health-related quality of life of the women who suffer from it, making them more vulnerable to the potential consequences of using biased or low-quality health information. Person-centred care (PCC) is defined as the provision of care that considers a patient’s clinical needs, life circumstances, and personal values and preferences [7]. A central component of PCC is to ensure quality communication between patients and healthcare professionals, with the aim of fostering the process of shared decision making (SDM) [8]. SDM-based interventions, such as patient decision aids (PtDAs), have been shown to improve patients’ knowledge about available treatments and their benefits/risks, decisional conflict and other decisional process variables [9]. There is a need to develop interventions to increase knowledge about PCC and digital health literacy (DHL) [10,11], particularly in chronic pathologies such as BC, where the impact of their diagnosis or treatment may increase the number of queries on the Internet and directly influence the understanding of health information [12]. Health literacy (HL) integrates the skills and motivation to find, understand, evaluate and use health information. 
As a result, HL facilitates informed decision making and improves the ability to manage and address health disparities, giving patients more autonomy and empowerment to take responsibility for their own health, as well as the health of their families and communities. In turn, low HL impacts health outcomes and health-related costs, leading to inefficient healthcare utilization and delivery [13]. DHL is an extension of HL that employs the same operational definition but in the context of information and communication technology resources. It involves both the provision of information and the degree to which information is understood. When these skills are lacking, technology solutions have the potential to either promote or hinder HL [14]. Due to the complexity of health information, it is recommended that DHL interventions be based on a design of co-creation of resources, websites and health tools through collaborative work with patients, allowing them to improve the medical care they receive [15,16,17]. Massive open online courses (MOOCs) are designed to engage a large number of participants learning remotely, offering the general population, clinical subpopulations or health professionals good quality knowledge on health issues through interactive and flexible technological resources, with little or no prior learning required [18]. To date, most MOOCs have been developed for the education of medical students and health professionals [19,20], but they have also been directed at the general population or clinical subpopulations, showing positive effects in several areas such as healthy nutrition habits [21], self-management of diabetes [22] or learning risk factors for dementia [23]. 
As has been observed in some projects with other populations, the development of educational interventions with a MOOC based on a co-creation design, which combines several resources in different formats and adapts to users’ different educational and cultural levels and needs, could be a strategy to address the HL, self-care and empowerment challenges faced by women with BC. One example is the IC-Health European project (https://cordis.europa.eu/project/id/727474/es accessed on 20 December 2022), whose results have shown good acceptance of co-created teaching resources aimed at improving the DHL of people with chronic diseases and the general population [24,25,26]. In recent years, the framework of participatory action research has been used for the development of eHealth. It is an approach that involves collaboration to develop a process through the construction of knowledge and social change in a community following a cyclical approach and involving stakeholders as co-investigators in the process [27,28]. As occurs in other participatory processes, the co-design of health interventions contributes to improving the services offered, to the extent that they are adjusted to the needs and priorities of their participants while incorporating their own skills [29,30,31]. In general, digital interventions, such as MOOCs, have the potential to improve the quality of life and outcomes for women with BC by providing access to information from anywhere at any time, thereby increasing accessibility and flexibility, as well as support to complement traditional medical treatments. Therefore, the aim of this study is to co-create a MOOC on PCC and DHL for and with women with BC. ## 2.1. Design The MOOC was co-created using a modified experience-based design approach [32]. The co-creation process was divided into three sequential phases: (a) exploratory phase, (b) development phase and (c) evaluation phase (see Section 2.3). ## 2.2. 
Participants and Recruitment Adult women (≥18 years) at any cancer stage and BC survivors (regardless of DHL level and knowledge about PCC), their families/carers and any healthcare professionals involved in the management of BC (oncologists, gynaecologists, nurses, psycho-oncologists, etc.) were invited to participate voluntarily in the MOOC co-creation process. Theoretical sampling was used to maximize the variability of sociodemographic and clinical profiles (age, educational level, time since diagnosis and active treatment) among women with BC. The recruitment was carried out via snowball sampling [33] through healthcare professionals and expert patients (BC survivors) between May and June 2020. Participants signed an informed consent declaration. ## 2.3. Procedure The co-creation process was carried out in three online sessions of 120 min each (via the Zoom platform due to the COVID-19 pandemic and delivered by members of the research team) between June 2020 and March 2021 and was supported by a Moodle platform. The first session (exploratory phase) was held in June 2020 and consisted of (i) a brief presentation of the participants; (ii) identifying the different diagnosis, treatment and long-term follow-up paths for BC represented through a patient journey map (PJM)—a scheme that aims to reflect the care pathway followed by a person [5,32]—based on their experiences, emotions, feelings and thoughts; (iii) exploring their empowerment and information needs in each phase of the disease; and (iv) exploring patients’ information needs and experiences on patient empowerment and SDM. Health professionals did not participate in the development of the PJM; they offered advice and their experiences on the most frequent concerns found in clinical practice with these patients, according to the phase of the disease. 
In the second session (development phase), held in July 2020, the participants reviewed the PJM and designed the structure and proposed the contents of the MOOC (self-care, myths related to BC, strategies to improve DHL, etc.) based on the empowerment and information needs identified in the first session and their previous experiences managing BC information online. At the end of this session, participants were encouraged to continue the process of co-creation online between July and December 2020 through a Moodle platform where the participants were registered and which they accessed with an individual username and password (assessment phase). The research team developed and shared some content proposals weekly for the different units of the MOOC, and participants were asked to provide feedback and/or new content proposals (see Section 3.3). Initially, the content of the units was presented in infographic format (see Supplemental Materials Figure S1) and was mainly related to PCC, self-care and DHL applied to BC. Once all the suggestions for improvement provided by the participants on the content were compiled, a graphic designer developed videos and edited the infographics to provide them with interactivity and visually improve their appearance. Updated contents were shared again with the participants in March 2021. Through questionnaires on the Moodle platform (see Section 2.4.2), they could give feedback on the definitive contents of the MOOC (see Section 2.4.1). A third session (evaluation phase) was held in March 2021 to offer final feedback about the content and interface of the MOOC (acceptability pilot) and to evaluate the experience in the co-creation of the MOOC by means of specific questionnaires (see Section 2.4). Four gift cards were raffled off as a token of appreciation during this last meeting. ## 2.4.1. 
Experience in the Co-Creation Process A 13-item questionnaire was specifically developed to explore patients’ and healthcare professionals’ experience in the co-creation process. The first 6 items were measured using a 5-point Likert scale (from “strongly disagree” to “strongly agree”), addressing satisfaction with communication, objective adequacy, usefulness of patient involvement in the co-creation process, importance of co-creation to design relevant content for patients, self-perception of increased knowledge and feeling of being part of the team project. The following 4 items were also assessed on a 5-point Likert scale (from “insufficient” to “excellent”) and were related to participants’ opinions on the quality and clarity of the co-creation sessions, the methodology employed, the interactions between participants and the researchers’ involvement. The last 3 items were open-ended questions about what participants liked the most and the least about the MOOC co-creation process, which aspects they found most useful and which aspects could be improved in the co-creation process (see Section 3.4). ## 2.4.2. Acceptability Pilot of the MOOC The MOOC’s acceptability was evaluated using a specific scale created in the context of the project following the technology acceptance model’s (TAM) methodology [34] and based on previous related studies [35,36]. This scale assessed factors such as ease of navigation, clarity of objectives and language, appropriateness of learning activities and quizzes, and other characteristics of the MOOC. The acceptability questionnaire, answered by both patients and healthcare professionals, included 18 items: the first 15 were rated on a 5-point Likert scale, and the last 3 items were open-ended questions about strengths and weaknesses, improvement suggestions and the main points learned throughout the MOOC (see Section 3.5). ## 2.5. Analysis The PJM and MOOC content were progressively developed in conjunction with participants. 
A draft was created with the information obtained from the online co-creation sessions. The different sections of the PJM summarize the experiences of participants with BC or survivors. The research group reviewed the contributions of the participants and proposed a draft version based on a PCC framework. Subsequently, this version of PJM and MOOC content was reviewed by all participants through an iterative process until consensus was reached. For the experience in the co-creation process and the acceptability pilot of the MOOC measures, means and standard deviations (SD) were calculated for all items assessed, and we also analysed the response distribution for each item. ## 3. Results Twenty-eight participants from Tenerife and Gran Canaria (Canary Islands, Spain) were contacted between May and June 2020, of whom 19 participated in the co-creation process: 17 patients (Table 1) and two healthcare professionals (nurses from gynaecology and breast pathology units; mean age 40 (1.41) years and with more than 10 years of professional experience). ## 3.1. Patient Journey Map Points of contact, experience with healthcare received, emotions, feelings and thoughts, diagnostic and therapeutic treatments, and perception about own participation in shared decision making for the three stages of the trajectory of care of BC (early detection and diagnosis, treatment and long-term follow-up) were collected on the co-designed PJM (Figure 1). ## 3.1.1. Early Detection and Diagnosis Stage Most of the participants received their diagnosis during routine controls (specialized care) or as a result of the presence of symptoms (primary care), and the main emotions that emerged during this time were shock, anxiety, uncertainty and worry about the future. The main diagnostic techniques that the participants underwent were physical examination (palpation), imaging tests (mammography and ultrasound) and biopsy. 
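The item-level summary described in the analysis section (means, standard deviations and response distributions for 5-point Likert items) can be sketched as follows; the responses below are hypothetical, not the study's data:

```python
from statistics import mean, stdev
from collections import Counter

# Hypothetical 5-point Likert responses for one questionnaire item
# (1 = strongly disagree ... 5 = strongly agree), one value per participant
item_responses = [5, 4, 5, 5, 3, 4, 5, 4, 5, 5, 4, 5, 3, 5, 4, 5, 5]

item_mean = mean(item_responses)        # average rating for the item
item_sd = stdev(item_responses)         # sample standard deviation
distribution = Counter(item_responses)  # response distribution per category

# Share of participants who agreed or strongly agreed (rating >= 4)
pct_agree = 100 * sum(1 for r in item_responses if r >= 4) / len(item_responses)
```

Repeating this per item yields the mean (SD) and distribution figures of the kind reported in the results tables.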
The experiences collected about the healthcare received in this stage were related to the perception of professionalism, friendliness, a predisposition to resolve doubts and the transmission of calm and encouragement from the healthcare professionals who attended to them. However, the participants expressed that there were other drawbacks in the medical care received at this time related to the challenges of early detection and the complexity of some administrative processes (e.g., medical appointments). Some participants expressed that they would have liked more advice from medical staff. Other participants expressed that they felt involved in the decision-making process in this phase, and this helped them accept the disease and have trust in the therapeutic approach to be used. ## 3.1.2. Treatment Stage Participants identified the involvement of other healthcare professionals (e.g., oncology, gynaecology, surgery and rehabilitation, among others). While uncertainty remained the predominant emotion in this stage, other emotions started to emerge as well, including concern for appearance and shock at the physical changes that were occurring as a result of the therapeutic techniques used in this stage (e.g., chemotherapy, radiotherapy, surgery, etc.). In general, the participants experienced empathetic care and a degree of psychological support from the healthcare professionals who assisted them. The participants felt more involved in the decision-making process in the gynaecology units than in the oncology units. They all concurred that the experience of informed participation in their treatment process was positive. ## 3.1.3. Long-Term Follow-Up Stage The main experience was less follow-up by healthcare professionals, giving rise to feelings of helplessness or loneliness and uncertainty about self-care. Other concerns, such as going back to work or looking for a new job more adapted to their health needs, were shared among the participants. 
The treatments at this stage focused on breast reconstruction surgery and medication. All the participants said they had received limited information from healthcare professionals on self-care, the medical care to follow from this stage onwards and possible new treatments. However, they commented that at this stage they felt empowered to choose the aspects of their health in which they wanted to be involved, leading them to request personalized attention and to ask questions in order to be more involved in the decision-making process. ## 3.1.4. Recommendations of the Participants for Other Women with BC or Survivors Additionally, and at their own initiative, the participants in the co-creation sessions provided a series of recommendations or tips for other women diagnosed with BC and suggested their inclusion in the MOOC as another resource. These tips covered the family, social, work and empowerment domains, with specific advice for each of the stages addressed (Figure 2). ## 3.2. Empowerment and Information Needs Figure 3 shows the empowerment and information needs identified in each phase. The main empowerment needs identified were related to strategies for emotional management and guidelines for self-care throughout the process from diagnosis until long-term follow-up. The main information needs were related to the lack of understanding of the meanings of biomarkers, parameters and acronyms found in reports, as well as medical jargon, treatment options and the likelihood of cancer recurrence. The need to have guidelines for accessing information and support resources available online, including association websites and online experiences of other women with BC, was highlighted. ## 3.3. MOOC Content Development Between July and October 2020, a weekly activity was published on the Moodle platform to carry out the process of co-design of the MOOC content. Table 2 shows the themes of these activities. 
Finally, the MOOC was composed of five units: (i) BC (definition, types and stages, diagnostic process, treatments, myths, etc.); (ii) PCC (definition, implementation strategies, tips for preparing consultations with the healthcare professional, etc.); (iii) DHL (definition, guidelines to improve each skill, etc.); (iv) self-care (management of physical side effects, emotional management, etc.); and (v) experiences and advice from patients in different areas (healthcare, family, social and work area) and moments of the disease (diagnosis, treatment and long-term follow-up). ## 3.4. Experience in the Co-Creation Process Data were available for seventeen participants ($89.47\%$) (Table 3). All of them strongly agreed or agreed that the general objectives of the project were adequate (item 2) and that the participation of women who have or have had BC is useful for the development of a MOOC on this content (item 3). More than $88\%$ of the participants strongly agreed or agreed that being part of the MOOC co-creation process made the content more relevant to them (item 4) and rated the quality of the activities carried out in the co-creation process (item 7) and the methodology applied (item 8) as very good or excellent. Regarding open questions, participants appreciated the way their experiences were incorporated into the MOOC and how they felt part of something meaningful, sharing experiences with other women in similar situations (item 11). In order to fully engage in the co-creation process, participants expressed that they would have liked to attend a face-to-face session. Additionally, some participants found it challenging to devote more time to the MOOC due to personal issues (item 12). See Supplementary Materials Table S1 to consult illustrative quotes from participants’ responses to open questions. ## 3.5. Acceptability Pilot of the MOOC Data were available for seven participants ($36.84\%$) (Table 4). 
Combining the “totally agree” and “agree” categories, most of the participants positively evaluated the acceptability of the MOOC in terms of language, content, relevance, proposed activities and suitability of the MOOC objectives. Regarding the open questions, most participants emphasized the usefulness of the MOOC’s content (especially related to SDM) and the way it is presented (through infographics and other audio-visual materials) as strengths. Nevertheless, one participant pointed out some navigation difficulties, while another emphasized the lengthy process (item 16). Where possible, improvements suggested by participants were implemented, such as adding an initial summary of the MOOC’s content (item 17). All the contents were mentioned as important topics learned after completing the MOOC (item 18). See Supplementary Materials Table S2 for illustrative quotes from participants’ responses to the open questions. ## 4. Discussion This study presents the development of a MOOC aimed at improving the DHL of women with BC. We used a co-creation approach involving 17 patients and survivors and two nurses. To inform the content of the MOOC, we explored participants’ perceptions of the extent to which they were involved in the decision-making process, as well as their feelings, emotions and information needs throughout the therapeutic process. Most participants indicated that the MOOC co-creation experience was positive and made them feel involved in the project, and they positively valued the final product. Similar results were obtained by our team with other MOOCs developed for pregnant and lactating women [26] and people with type-2 diabetes [25], including larger samples than the one used in this study. In these two studies, participants’ self-perceived DHL significantly improved after completing the MOOC development compared to baseline.
Future work is warranted to evaluate the effectiveness of this MOOC at improving BC patients’ actual DHL (not only self-perceived), their objective knowledge of the disease and treatments, and their involvement in treatment decisions. Women in this study pointed out information needs concerning different stages of the cancer, from diagnosis to long-term follow-up, as shown in previous studies [37,38]. Increasingly, these patients want to be involved in the decisions related to their health, and some studies have focused on drawing on the patient experience to improve the healthcare they receive [31,39]. As a result of an exchange of information and values between patients and healthcare providers, SDM engages patients as partners in their own care and optimizes the decision-making process [40]. To support SDM and the use of patient decision aids (PtDAs) in practice, it is important that patients also have a certain level of HL to increase patient empowerment and allow them to adopt a more participatory role in their healthcare [41]. Online interventions that provide information and support to women with BC appear to cushion the uncertainty they experience at different stages of the disease, and MOOCs can be an effective educational resource for meeting these unmet needs and promoting both DHL and SDM processes [24,39]. The PJM considered the evolving requirements for empowerment during the stages of diagnosis, treatment and long-term follow-up. Knowledge of patients’ experiences, through a PJM, facilitates the identification of key moments at which to provide more precise information [5]. As we have seen in the results of this study, depending on the individual experiences of each woman, the care received during various BC periods could be perceived as more or less satisfactory. Based on our results, women with BC positively valued the experience of participating in the co-creation process of the MOOC, which made the content more relevant to them.
This result aligns with previous evidence suggesting that a user-centred design process involves the participation of groups of users throughout the entire development cycle, during which they describe the context in which the generated resources will be used and their needs as users, and take part in user tests [42,43]. All of these contributions inform the iterative design and building of health information technology [44]. This intervention represents an opportunity to reach a larger population that, due to health, availability and/or travel circumstances, may find it impossible to attend another type of face-to-face training on this subject. Technology provides great options for enhancing patient care; however, disparities in access and DHL continue to negatively impact vulnerable populations because of potential barriers in the digital sphere for those with low HL [11]. This problem can be especially aggravated as more information is provided online. Healthcare professionals must be involved in developing these skills in their patients with BC, but they also require support and a strategy at the institutional level. Therefore, healthcare organizations must prioritize achieving accessibility for all patients when designing eHealth services [11]. In this regard, the integration of educational materials designed by a representative sample of the target population to which they are addressed makes this proposal an opportunity to contribute to obtaining relevant health results for affected patients, their healthcare professionals and, ultimately, decision-makers with financial capacity. From a managerial perspective, healthcare organizations should reframe their strategies, procedures and approaches, embracing a patient-centred perspective to become health literate [45].
From a policy perspective, this suggests that individual HL and organizational HL should be handled as two complementary tools to empower people and to engage them in self-care and health policy making [45]. The main strength of this project is having involved the intended audience in the creation of the MOOC, which enhances the relevance of the material covered and how it is delivered. This is important because the intended audience has valuable insights and perspectives on the subject matter and can provide feedback on the relevance and effectiveness of the content and its delivery. This can lead to the creation of more engaging and effective MOOCs that better meet the needs and expectations of the intended audience. Nonetheless, there are several limitations to the study. Initially, it had been proposed that the co-creation process be based on face-to-face sessions with the participants followed by some online sessions through the Moodle platform. However, due to the COVID-19 pandemic, face-to-face sessions were replaced by online sessions carried out through the Zoom platform. This made the co-creation process last a few weeks longer than expected, as work rhythms had to be adapted to the participants’ availability and internet resources. However, the online sessions had several advantages: participants did not have to travel, the meetings were easier to organize and fewer financial resources were needed to support the development of the sessions. Another limitation is that, although all professionals related to BC were invited to participate, only two nurses did so. Perhaps the participation of other professionals involved in the process (e.g., gynaecologists and oncologists), as well as family members and/or caregivers, could have been beneficial for the generation of more useful resources.
Even though women of all educational levels participated, the majority had higher education, so there was not much variability in this regard and lower educational levels may have been under-represented. In addition, there is a need for independent evaluation of acceptability to confirm the results obtained. Likewise, it is necessary to evaluate the effectiveness of the MOOC with an independent sample, to determine whether there is a real improvement in DHL levels and a change in knowledge across all the areas included in the different modules of the MOOC (BC, PCC, DHL, etc.). ## 5. Conclusions The work carried out in this project illustrates how educational interventions in MOOC format, directed and designed by women with BC and containing resources in different formats adapted to the users’ educational/cultural levels and needs, can be a viable strategy for generating higher-quality, useful resources for this population. The co-creation methodology and this type of resource aim to address the literacy and empowerment challenges of women with BC.
# Palmitic Acid Regulation of Stem Browning in Freshly Harvested Mini-Chinese Cabbage (Brassica pekinensis (Lour.) Rupr.) ## Abstract The effect of palmitic acid (PA) on stem browning was investigated in freshly harvested mini-Chinese cabbage (Brassica pekinensis). Results indicated that concentrations of PA ranging from 0.03 g L−1 to 0.05 g L−1 inhibited stem browning and decreased the rate of respiration, electrolyte leakage, and weight loss, as well as the level of malondialdehyde (MDA), in freshly harvested mini-Chinese cabbage stored at 25 °C for 5 d. The PA treatment enhanced the activity of antioxidant enzymes (ascorbate peroxidase (APX), catalase (CAT), peroxidase (POD), 4-coumarate:CoA ligase (4CL) and phenylalanine ammonia lyase (PAL)), and inhibited the activity of polyphenol oxidase (PPO). The PA treatment also increased the level of several phenolics (chlorogenic acid, gallic acid, catechin, p-coumaric acid, ferulic acid, p-hydroxybenzoic acid, and cinnamic acid) and flavonoids (quercetin, luteolin, kaempferol, and isorhamnetin). In summary, the results indicate that treatment of mini-Chinese cabbage with PA represents an effective method for delaying stem browning and maintaining physiological quality during 5 d of storage, owing to the ability of PA to enhance antioxidant enzyme activity and the level of phenolics and flavonoids. ## 1. Introduction Mini-Chinese cabbage (*Brassica pekinensis* (Lour.) Rupr.) is a green leafy vegetable in the family Cruciferae [1]. It is a common component of Asian diets and is becoming increasingly used in Western diets [2,3]. It has great health benefits, including anticancer, anti-obesity, and antioxidant effects [2,4]. Freshly harvested mini-Chinese cabbage, however, is very susceptible to browning, vitamin loss, softening, and the production of off-flavors, which reduce its economic value [5,6].
Stem browning represents a major factor affecting the quality of freshly harvested mini-Chinese cabbage, degrading its appearance and reducing consumer acceptance [7]. Thus, stem browning reduces the marketable shelf life of mini-Chinese cabbage. In addition to appearance, stem browning also affects the flavor of mini-Chinese cabbage, rendering it inedible. Browning is one of the most significant defects of leafy vegetables [8]. When leafy tissues are injured, phenolics are synthesized and then oxidized to quinones by polyphenol oxidase (PPO) [9], resulting in a rapid browning reaction in leaf tissues. Plants also produce several antioxidant enzymes, phenolics, and flavonoids to counteract the excessive production of reactive oxygen species (ROS) in response to tissue injury. Excessive ROS accumulation induces the peroxidation and breakdown of unsaturated fatty acids in membrane lipids [10]. Several different treatments have been used to protect leafy vegetables from browning and maintain their quality during storage. Application of dimethyl dicarbonate has been reported to reduce browning in Chinese cabbage, and N-phenyl-N-(2-chloro-4-pyridyl) urea was also reported to regulate browning in Chinese flowering cabbage [11,12]. A combination of 0.001 g L−1 4-hexylresorcinol, 0.05 g L−1 potassium sorbate, and 0.025 g L−1 N-acetylcysteine was also reported to prevent browning of radish slices [13]. The use of palmitic acid (PA) to regulate browning has also been investigated in longan fruit [14,15]. PA (16:0) is one of the most common saturated fatty acids in humans and can be obtained from ingested foods or synthesized from other carbohydrates, fatty acids, and amino acids [16]. PA is also the main component of fatty acids in cell membranes.
Previous studies in longan fruit (Dimocarpus longan) found that the content of saturated fatty acids (including PA and stearic acid) increases when fruit starts to brown, which affects the structural integrity of cell membranes [14,15]. The structural changes in membranes induce the synthesis of PPO and phenolics, which accelerate browning [17]. Current studies indicate that PA content is enhanced during browning but that exogenous PA may inhibit browning. Thus, the role of PA in browning and the physiological response of plant tissues to PA require additional investigation. While the effect of PA has been investigated in animal cell experiments, no reports have been published on the effect of PA on stem browning in freshly harvested mini-Chinese cabbage. The objective of this study was to determine the effect of PA on freshly harvested mini-Chinese cabbage. We assessed the effect of PA on stem browning, respiration rate, electrolyte leakage, MDA content, antioxidant enzyme activity, and the level of phenolics and flavonoids in mini-Chinese cabbage during storage. ## 2.1. Plant Materials and Treatments Mature mini-Chinese cabbages (*Brassica pekinensis* (Lour.) Rupr. cv ‘Xiaoqiao’, Beijing Shinong Seeds Co., Ltd., Beijing, China) were harvested from a farm in Xiaotangshan, Beijing, China. Mini-Chinese cabbages were approximately 0.25 m in height and were harvested 60 d after planting. Harvested plants were transported to the laboratory within 3 h. Mini-Chinese cabbages utilized in this study were 8–9 cm in diameter and free of any evidence of pests, disease, or mechanical damage. The freshly harvested mini-Chinese cabbages were divided into 4 groups of 25 cabbages each. Individual groups were immersed in either 0.03 g L−1, 0.04 g L−1, or 0.05 g L−1 PA dissolved in ethanol (Aladdin, AR, Shanghai, China) for 30 s, with immersion in ethanol alone serving as the control.
The treated cabbages were placed in trays and air-dried for 10 min, after which the trays were covered with 0.03 mm polyethylene film (the film was not perforated). The cabbages were then placed in storage at 25 °C and 85–$90\%$ relative humidity in a constant temperature and humidity warehouse. Leaf and stem tissues were sampled at 0, 1, 2, 3, 4, and 5 d of storage and immediately ground to a powder in liquid nitrogen and stored at −80 °C until further processing. Each experiment and each of the subsequent assays utilized three replicates, and the experiment was repeated three times (n = 9). ## 2.2. Weight Loss and Respiration Rate Weight loss was measured as described by Duan et al. [18]. Weight loss was expressed as the percentage loss from the original weight and calculated using the formula: Weight loss (%) = 100 × (initial weight − final weight)/initial weight (1). The respiration rate was measured using an F-940 Gas Analyzer (Felix, Washington, DC, USA). A 500 g sample of cabbage was placed in a 1 L gas-tight box for 1 h, after which the concentration of CO2 was determined. Results are expressed as mg CO2 kg−1 h−1. ## 2.3. Color and Browning Index (BI) A CR-400 automated colorimeter (Konica Minolta Holdings, Inc., Tokyo, Japan) was used to measure color and the browning index (BI) [19]. The L*, a*, and b* values of each sample were determined and the BI was calculated as follows: BI = [100 × (x − 0.31)]/0.172 (2), where x = (a* + 1.75L*)/(5.645L* + a* − 3.012b*) (3). ## 2.4. Electrolyte Leakage and MDA Content Electrolyte leakage was measured with a DDS-11A conductivity meter (Shanghai Instrument and Electronic Scientific Instruments, Ltd., Shanghai, China) by the method of Li et al. [12]. A leaf disc with a diameter of 1 cm was collected from each of ten different leaves and each disc was placed in 25 mL of distilled water (dH2O). Conductivity of the solution was measured after 2 h of incubation at room temperature and used as the initial value (P1).
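The weight-loss formula (Eq. 1) and the browning-index calculation (Eqs. 2–3) above can be expressed as two small functions. This is an illustrative sketch only, not code from the study, and the input values below are hypothetical readings, not the authors' measurements.

```python
def weight_loss_pct(initial_g, final_g):
    """Weight loss (%) = 100 x (initial weight - final weight) / initial weight (Eq. 1)."""
    return 100.0 * (initial_g - final_g) / initial_g

def browning_index(L, a, b):
    """Browning index from CIELAB readings (Eqs. 2-3):
    x = (a* + 1.75 L*) / (5.645 L* + a* - 3.012 b*);  BI = 100 (x - 0.31) / 0.172."""
    x = (a + 1.75 * L) / (5.645 * L + a - 3.012 * b)
    return 100.0 * (x - 0.31) / 0.172

# Hypothetical example: a 500 g sample that lost 3.7 g, and one colorimeter reading.
print(weight_loss_pct(500.0, 496.3))
print(browning_index(50.0, 5.0, 20.0))
```

A higher BI corresponds to a darker, more browned stem; the three CIELAB values come directly from the colorimeter described above.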
The solutions containing the leaves were then boiled at 100 °C for 5 min, cooled, and the conductivity (P2) was again assessed. Percentage electrolyte leakage was calculated as follows: electrolyte leakage (%) = (P1/P2) × 100. MDA content was determined by the method of Xu et al. with minor modifications [20]. Samples (1 g) were homogenized in 5 mL of $10\%$ (w/v) trichloroacetic acid (TCA) (Shanghai Macklin Biochemical Co., Ltd., AR, Shanghai, China) and centrifuged at 12,000× g for 15 min at 4 °C. Then, 1 mL of supernatant was mixed with 3 mL of $0.5\%$ (w/v) thiobarbituric acid (Shanghai Macklin Biochemical Co., Ltd., analytical reagent (AR), Shanghai, China) and boiled for 20 min. After this, the absorbance (UV-1800 spectrophotometer, Shimadzu, Tokyo, Japan) of the solution was determined at 450, 532 and 600 nm. MDA content (μmol L−1) was calculated as {[6.45 × (OD532 − OD600) − 0.56 × OD450] × V}/(Vs × m × 1000), where V is the total volume of the extracting solution (mL); Vs is the measured volume of the extracting solution (mL); and m is the weight of the sample (g). ## 2.5. Total Phenolics and Total Flavonoids Frozen samples of cabbage (2 g) were powdered and mixed with 10 mL of $80\%$ ethanol (AR, Aladdin, Shanghai, China) (diluted with dH2O), and extracted for 40 min at 40 °C. The mixture was then centrifuged (D-37520 centrifuge, Beijing Chengmao Industrial Science & Development Co., Ltd., Beijing, China) at 12,000× g for 25 min at 4 °C. The supernatant was used for measuring total phenolics and flavonoids. Total phenolics were determined using Folin–Ciocalteu (FC) reagent (Shanghai Yuanye Bio-Technology Co., Ltd., Shanghai, China) by the method of Fan et al. [21]. Test tubes containing 800 μL dH2O and 200 μL FC reagent were prepared, and then 400 μL of supernatant was added to each test tube, vortexed, and incubated for about 3 min at 20 °C. Subsequently, 400 μL of $20\%$ (w/v) Na2CO3 (Aladdin, AR, Shanghai, China) and 1.2 mL dH2O were added to the mixture.
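The electrolyte-leakage ratio and the MDA formula defined above can likewise be sketched as functions. Again this is illustrative only; the conductivity and absorbance values used in the example are hypothetical, not measurements from the study.

```python
def electrolyte_leakage_pct(p1, p2):
    """Electrolyte leakage (%) = (P1 / P2) x 100, where P1 and P2 are the
    conductivities measured before and after boiling the leaf-disc solution."""
    return p1 / p2 * 100.0

def mda_umol_per_L(od532, od600, od450, v_total_mL, v_measured_mL, mass_g):
    """MDA (umol L-1) = {[6.45 (OD532 - OD600) - 0.56 OD450] x V} / (Vs x m x 1000)."""
    return ((6.45 * (od532 - od600) - 0.56 * od450) * v_total_mL
            / (v_measured_mL * mass_g * 1000.0))

# Hypothetical readings, for illustration only:
print(electrolyte_leakage_pct(0.2, 1.0))
print(mda_umol_per_L(0.5, 0.1, 0.2, v_total_mL=5.0, v_measured_mL=1.0, mass_g=1.0))
```

The volumes in the MDA example (V = 5 mL extract, Vs = 1 mL measured) mirror the extraction volumes given in the protocol above.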
The mixture was then incubated in a water bath (XMTD-6000 water bath kettle, Yuyao Jindian Instrument Co., Ltd., Yuyao, China) at 20 °C for 60 min. The absorbance of the sample solutions at 760 nm was then measured in a spectrophotometer (UV-1800 spectrophotometer, Shimadzu, Tokyo, Japan) and used to determine the level of total phenolics based on a standard curve constructed using different concentrations of gallic acid (Shanghai Yuanye Bio-Technology Co., Ltd., Shanghai, China). Flavonoid content was measured using the method of Zhou et al. [22] with some modification. Amounts of 1 mL supernatant, 0.25 mL of $10\%$ (w/v) AlCl3 (Aladdin, AR, Shanghai, China) and 1 mL of $5\%$ (w/v) NaNO2 (Aladdin, AR, Shanghai, China) were mixed. After 5 min, 1 mL of 1.0 mol L−1 NaOH (Aladdin, AR, Shanghai, China) was added to the mixture. Catechinic acid (Shanghai Yuanye Bio-Technology Co., Ltd., Shanghai, China) was used as a standard to calculate the flavonoid content. Absorbance of the resulting sample solution was measured at 510 nm and used to determine flavonoid content. The concentrations of total phenolics and flavonoids were expressed as g kg−1. The levels of specific phenolics (gallic acid, catechin, chlorogenic acid, p-hydroxybenzoic acid, p-coumaric acid, ferulic acid, and cinnamic acid) (Shanghai Yuanye Bio-Technology Co., Ltd., Shanghai, China) and specific flavonoids (quercetin, kaempferol, luteolin, and isorhamnetin) (Shanghai Yuanye Bio-Technology Co., Ltd., Shanghai, China) were determined by HPLC (1260, Agilent Technologies Co., Ltd., Palo Alto, CA, USA) according to the method of Xu et al. [20]. Briefly, 2 g of sample powder was mixed with 2 mL of methanol (Aladdin, GR, Shanghai, China) and ultrasonicated for 1 h (40 kHz), after which the mixture was centrifuged at 10,000× g for 15 min. The supernatants were utilized in the HPLC analysis.
Conditions were as follows: an Eclipse Plus C18 (250 mm × 4.6 mm, 5 μm) column was used, with a temperature of 30 °C. The detector was a UV detector (280 nm). The mobile phase consisted of $1\%$ formic acid in water (A) and methanol ($100\%$) (B), and the gradient elution conditions were 0–3 min, $15\%$ to $30\%$ B; 3–35 min, $30\%$ to $45\%$ B; 35–45 min, $45\%$ to $65\%$ B; 45–50 min, $65\%$ to $15\%$ B. ## 2.6. CAT, APX, PPO, and POD Enzyme Activity CAT and APX enzyme activity was assessed according to the method of Wang et al. [23]. Briefly, 2.0 g of cabbage powder was mixed with 10 mL of 0.1 mol L−1 phosphate buffer solution (PBS, pH 7.8, containing 0.05 g of polyvinylpyrrolidone (PVPP, Golden Clone (Beijing) Biotechnology Co., Ltd., Beijing, China)), and then centrifuged at 12,000× g for 10 min at 4 °C. The supernatant was utilized to determine CAT and APX activity. Amounts of 0.1 mL of supernatant, 1 mL of $0.3\%$ (v/v) H2O2, and 1.9 mL of 0.05 mol L−1 PBS (pH 7.8) were mixed. Absorbance of the resulting solution was determined at 240 nm to calculate CAT activity. The APX reaction mixture included 1.2 mL of supernatant, 2.6 mL of 0.1 mmol L−1 EDTA (Shanghai Macklin Biochemical Co., Ltd., AR, Shanghai, China) and 0.5 mmol L−1 AsA (Aladdin, GR, Shanghai, China) (in PBS, pH 7.5), and 0.3 mL of 2 mmol L−1 H2O2. Absorbance was measured at 290 nm to calculate APX activity. PPO and POD activity were determined by the method of Zhou et al. [22] with slight modification. Frozen cabbage powder (2 g) was added to 10 mL of 0.1 mol L−1 PBS (pH 6.4) containing 0.05 g PVPP. The resulting mixture was then centrifuged at 12,000× g for 30 min at 4 °C.
Next, 0.1 mL of supernatant was mixed with 0.6 mL of 50 mmol L−1 catechol (Nantong Runfeng Petrochemical Co., Ltd., AR, Nantong, China) substrate to measure PPO activity, while 0.9 mL of $0.2\%$ (v/v) guaiacol (Nantong Runfeng Petrochemical Co., Ltd., AR, Nantong, China) was mixed with 1 mL of $0.3\%$ (v/v) H2O2 to measure POD activity. Absorbance was measured at 410 nm to calculate PPO activity and at 470 nm to calculate POD activity. ## 2.7. PAL and 4CL Enzyme Activity PAL activity was assessed according to the method of Kamdee et al. [24] with slight modification. Briefly, 1 g of powdered cabbage leaf tissue was added to 4 mL of 50 mmol L−1 borate buffer (BBS, pH 8.5) containing 5.0 mmol L−1 2-mercaptoethanol (Aladdin, GR, Shanghai, China) and 0.4 g PVPP. The resulting mixture was then centrifuged at 12,000× g for 20 min at 4 °C. An amount of 0.3 mL of supernatant was added to 0.7 mL of 100 mmol L−1 l-phenylalanine (AR, Xiya Reagent, Linyi, China) and 3 mL of 50 mmol L−1 borate buffer (BBS, pH 8.5). The mixture was incubated at 40 °C for 1 h, and then 0.1 mL of 5 mmol L−1 HCl (Beijing Institute of Chemical Reagents, AR, Beijing, China) was added to stop the reaction. Absorbance of the solution was measured at 290 nm to calculate PAL activity. The level of 4CL enzyme activity was determined using a commercial assay kit (Comin Biotechnology Co., Ltd., Suzhou, China). Briefly, 2.0 g of sample was homogenized in 10 mL of extraction buffer, and the resulting mixture was then centrifuged at 8000× g for 10 min at 4 °C. Absorbance at 333 nm was measured to calculate 4CL enzyme activity. ## 2.8. Data Analysis SPSS ver. 22 (SPSS Inc., Chicago, IL, USA) software was used to conduct the statistical analyses. Data were subjected to a two-way ANOVA and mean separations were performed using Pearson’s multiple range test. For single mean comparisons, data were subjected to an LSD analysis in which differences at $p \leq 0.05$ were considered significant.
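As a concrete illustration of the single-mean comparison step described in the data analysis above, the sketch below runs an ANOVA across treatment groups followed by an LSD-style pairwise test based on the pooled within-group variance. It is a simplified sketch, not the study's SPSS workflow: it uses a one-way rather than two-way design, and the data are randomly generated placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Placeholder data (n = 9 per group, matching the replication scheme);
# group names mirror the four treatments. NOT measurements from the study.
rng = np.random.default_rng(0)
groups = {
    "control": rng.normal(3.0, 0.2, 9),
    "PA_0.03": rng.normal(1.9, 0.2, 9),
    "PA_0.04": rng.normal(2.1, 0.2, 9),
    "PA_0.05": rng.normal(1.2, 0.2, 9),
}

# One-way ANOVA across the treatment groups.
f_stat, p_value = stats.f_oneway(*groups.values())

# LSD-style pairwise comparison: a t-test on two group means using the pooled
# within-group variance (MSE); p <= 0.05 is taken as significant.
data = list(groups.values())
k = len(data)
n_total = sum(len(g) for g in data)
mse = sum(float(((g - g.mean()) ** 2).sum()) for g in data) / (n_total - k)

def lsd_p(g1, g2):
    se = np.sqrt(mse * (1.0 / len(g1) + 1.0 / len(g2)))
    t = (g1.mean() - g2.mean()) / se
    return 2.0 * stats.t.sf(abs(t), df=n_total - k)

print(p_value, lsd_p(groups["control"], groups["PA_0.05"]))
```

With clearly separated group means, both the overall ANOVA and the control-vs-0.05 g L−1 comparison come out significant at the 0.05 level.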
All results presented are means ± standard deviation (SD). ## 3.1. Stem Browning The visual color of cabbage stems is an excellent indicator of their degree of browning. Stem browning in the control group greatly reduced visual quality over 4 d of storage (Figure 1A). Cabbage stems treated with PA, however, exhibited a slower rate of browning than the control over the 4 d of storage. This was also reflected in the BI (Figure 1B). The BI increased during storage in both PA-treated and control cabbage stems; however, the BI was lower in PA-treated samples than in the control group (Figure 1B). Notably, cabbage stems treated with 0.05 g L−1 PA had the lowest BI relative to the control and to cabbages treated with lower concentrations of PA. The BI is calculated from the values obtained for L*, a*, and b*. Figure 1C–E illustrate the changes in L*, a*, and b* in stems of the PA-treated and control groups over 5 d of storage at 25 °C. The level of L* in both control and PA-treated samples decreased during storage; however, the L* value in the control group was lower than in the PA-treated cabbages during the entire course of storage. In contrast, a* and b* exhibited an increasing trend, with the magnitude of increase in the different treatment groups ordered control > 0.03 g L−1 PA > 0.04 g L−1 PA > 0.05 g L−1 PA. ## 3.2. Respiration Rate, Weight Loss, Electrolyte Leakage, and MDA Content The respiration rate exhibited a rapid increase after cabbages were placed in storage, reaching its highest level after 2 d, after which it decreased and then remained relatively stable (Figure 2A). Peak respiration rates were 17.752 (control), 14.063 (0.03 g L−1 PA), 11.782 (0.04 g L−1 PA), and 10.625 (0.05 g L−1 PA) mg CO2 kg−1 h−1. The percentage weight loss increased in all groups (Figure 2B); however, weight loss in the control samples was higher than in any of the PA-treated samples from 1 to 5 d of storage.
The percentage weight loss in the control group was $0.735\%$ after 5 d, which was higher than the $0.517\%$ weight loss observed in the 0.05 g L−1 PA-treated group. Electrolyte leakage also showed an increasing trend in all groups (Figure 2C). The increase in the control group was higher than that in the PA-treated groups starting from 2 d. The 0.03 and 0.04 g L−1 PA-treated cabbages exhibited a similar pattern to each other, with higher leakage than the 0.05 g L−1 PA-treated samples. After 5 d, electrolyte leakage had increased to $0.216\%$, $0.192\%$, and $0.181\%$ in the 0.03, 0.04, and 0.05 g L−1 PA treatment groups, respectively, and to $0.290\%$ in the control group. MDA levels increased during the first 3 d of storage in all treatment groups and then declined (Figure 2D). Lower MDA levels were observed in PA-treated samples than in the control group. Peak MDA content was 1.868, 2.111, and 1.213 μmol kg−1 in the 0.03, 0.04, and 0.05 g L−1 PA-treated samples, respectively, compared to 3.080 μmol kg−1 in the control group. ## 3.3. PPO, 4CL, PAL, APX, CAT, and POD Activity PPO activity in the control group was higher than in the PA-treated samples (Figure 3A), exhibiting an increasing trend during the first 3 d in all groups and then declining. Peak PPO activity was 0.363 units in the control group, compared with 0.325 units in the 0.03 g L−1, 0.108 units in the 0.04 g L−1, and 0.095 units in the 0.05 g L−1 PA-treated groups. The enzymatic activity of 4CL rapidly increased in the first 2 d and then fluctuated from 2 to 5 d, with higher activity observed in the PA-treated groups. After 2 d of storage, 4CL activity was 800.312, 837.459, and 828.569 units in the 0.03, 0.04, and 0.05 g L−1 PA-treated samples, respectively. Changes in PAL activity in the control and PA-treated groups are shown in Figure 3C.
PAL activity exhibited an increasing trend in all groups during the first 2 d, with the PA-treated groups generally exhibiting higher activity. The 0.05 g L−1 PA-treated samples exhibited the highest level of activity (93.600 units). APX activity increased in both the control and PA-treated groups during the first 2 d and then declined (Figure 3D). APX activity in the control group was lower (0.036 units) at 2 d of storage, while APX activity was 0.042 units, 0.045 units, and 0.053 units in the 0.03, 0.04, and 0.05 g L−1 PA treatment groups, respectively. The data indicated that CAT activity in all groups reached a maximum at 3 d and then decreased (Figure 3E). Although the pattern of CAT activity was similar in all of the treatment groups, CAT activity was generally higher in the PA-treated groups than in the control group. POD activity reached a maximum after 2 d of storage and then remained stable (Figure 3F). POD activity after 5 d of storage was 0.185, 0.185, 0.248, and 0.288 units in the control group and the 0.03, 0.04, and 0.05 g L−1 PA treatment groups, respectively. ## 3.4. Total Phenolics, Gallic Acid, Catechin, Chlorogenic Acid, p-Hydroxybenzoic Acid, p-Coumaric Acid, Ferulic Acid, and Cinnamic Acid The level of total phenolics increased in both the control and PA-treated groups over the 5 d (Figure 4A). Although the trend was similar in all of the treatment groups, the levels of total phenolics in the PA-treated groups were higher than in the control group. The level of total phenolics after 5 d of storage was 1.456 g kg−1, 1.863 g kg−1, 2.331 g kg−1, and 1.658 g kg−1 in the 0.03 g L−1 PA, 0.04 g L−1 PA, 0.05 g L−1 PA, and control treatment groups, respectively. Cinnamic acid content gradually increased in all groups (Figure 4B). The highest level of cinnamic acid (2.361 mg kg−1) was observed at 3 d in the 0.05 g L−1 PA-treated group.
The content of gallic acid exhibited an initial decrease at 1 d of storage and then an increasing trend in all groups from 2 to 5 d (Figure 4C). The content of gallic acid after 5 d of storage was 4.030 mg kg−1, 5.067 mg kg−1, 5.615 mg kg−1, and 6.873 mg kg−1 in the control, 0.03 g L−1 PA, 0.04 g L−1 PA, and 0.05 g L−1 PA treatment groups, respectively. The chromatogram and standard curve are in Figure S1. Catechin content exhibited an increasing trend in the control and PA-treated groups during the first 4 d and then declined (Figure 4D). At 4 d, the 0.04 and 0.05 g L−1 PA-treated samples exhibited a higher level (20.632 and 20.293 mg kg−1) of catechin. Chlorogenic acid content exhibited an increasing trend during storage, with PA-treated groups having a higher content than that in the control group (Figure 4E). Chlorogenic acid content at 5 d was 21.757 mg kg−1 (0.03 g L−1 PA), 20.866 mg kg−1 (0.04 g L−1 PA), 29.859 mg kg−1 (0.05 g L−1 PA), and 17.276 mg kg−1 (control). The level of p-Hydroxybenzoic acid showed an enhanced trend during storage in PA-treated groups (Figure 4F). The level of activity in the 0.05 g L−1 PA treatment group exhibited a change at 3 d (7.640 mg kg−1), a level that was much higher than in other groups. The level of p-Coumaric acid exhibited a slight to high increase in all groups during the first 3 d, and then declined. The level of p-Coumaric acid exhibited a peak of activity in the 0.03 g L−1, 0.04 g L−1, and 0.05 g L−1 PA treatment groups at 3 d of storage when levels increased to 3.818 mg kg−1, 3.822 mg kg−1, and 9.083 mg kg−1, while the level in the control group remained stable (Figure 4G). The level of ferulic acid fluctuated in all groups (Figure 4H). The level of ferulic acid in the 0.05 g L−1 PA-treated group at 3 d was 6.973 mg kg−1, which was more than 1.28 times greater than it was in the other treatment groups. ## 3.5. 
Total Flavonoids, Quercetin, Luteolin, Kaempferol, and Isorhamnetin Flavonoid content exhibited the same trend as total phenolics, increasing with storage time in all the treatment groups (Figure 5A). The highest content of flavonoids (1.455 g kg−1) was observed in the 0.05 g L−1 PA-treated group and was higher (1.63 times greater) than in the control group. Quercetin content increased before the first 4 d and then decreased (Figure 5B). Quercetin content, however, was higher in the 0.05 g L−1 PA treatment group than it was in the control group during all the storage time. Luteolin content increased during the first day and then decreased in all groups (Figure 5C). Peak content was observed at 2 d in 0.03 g L−1, 0.04 g L−1, 0.05 g L−1 PA treatment groups, at which luteolin content was 6.788 mg kg−1, 7.418 mg kg−1, and 9.109 mg kg−1, respectively. Kaempferol levels greatly increased in PA-treated groups (Figure 5D). In contrast, kaempferol content changed only slightly in the control group. Kaempferol content in all of the PA-treated groups was higher than it was in the control group. Isorhamnetin content exhibited a decreasing trend in all groups (Figure 5E). Isorhamnetin content was higher in the 0.04 g L−1 and 0.05 g L−1 PA-treated groups than it was in other groups from 1 to 3 d. ## 3.6. Correlation Analysis Pearson coefficients were used to determine the correlation between the different measured parameters (Figure 6A). The analysis indicated that BI levels were significantly positively correlated with a*, b*, weight loss, electrolyte leakage, MDA content, and 4CL activity. BI levels were negatively correlated with L* (Figure 6A). Total phenol content was significantly positively correlated with the level of cinnamic acid, gallic acid, catechin, chlorogenic acid, p-Hydroxybenzoic acid, flavonoids, quercetin and kaempferol. Total phenol content was significantly negatively correlated with the content of luteolin and isorhamnetin (Figure 6B). ## 4. 
Discussion

Previous studies have indicated that tissue browning in plants induces an increase in the level of PA, which may explain why exogenous PA can inhibit browning [14,15]. Mini-Chinese cabbage stems are prone to browning after harvest, and this browning underlies the loss in quality. This study indicated that mini-Chinese cabbages treated with PA for 30 s exhibit reduced weight loss, a lower rate of respiration, and reduced levels of stem browning and electrolyte leakage. In contrast, antioxidant enzyme activity and the levels of many phenolic and flavonoid compounds were enhanced. These responses collectively helped to maintain the quality of cabbage heads in storage and extend their shelf life. The commercial value of mini-Chinese cabbages depends on their quality; thus, it is essential to maintain their quality during storage. Weight loss and high respiration rates are known indicators of quality degradation in mini-Chinese cabbage, and reports have indicated that strategies that suppress respiration and reduce evaporation in harvested plants help to maintain their postharvest quality [25]. In our study, weight loss and respiration rate were reduced in response to immersion of the cabbage heads in PA for 30 s. These results confirm that PA lowers the basal metabolism of mini-Chinese cabbage. The permeability of cell membranes increases when plants begin to senesce, which promotes the peroxidation of lipids [10]. MDA is commonly recognized as an indicator of oxidative stress and peroxidation [26], and increased electrolyte leakage and MDA levels serve as indicators of membrane degradation and peroxidation of membrane lipids, respectively [27]. Our results showed that PA treatment reduced both electrolyte leakage and MDA levels in mini-Chinese cabbages relative to the untreated control. These results are similar to those of a previous study by Li et al.
[12], who reported that electrolyte leakage and MDA levels in Chinese flowering cabbage were maintained during storage by treatment with N-phenyl-N′-(2-chloro-4-pyridyl)urea (CPPU). Additionally, PA is the main fatty acid component of cell membranes and contributes to the resilience of cell membranes exposed to stress. Our collective results indicate that PA treatment of mini-Chinese cabbage can help to maintain quality in storage. Browning is a complex process that can be affected by many enzymes and chemical compounds, and tissue color is its most recognizable visual manifestation. In our study, the PA treatment delayed stem browning, as indicated by a reduction in the BI of mini-Chinese cabbage stems relative to the control. High levels of PPO activity are known to induce enzymatic browning of plant tissues during storage by catalyzing the oxidation of phenolics to quinones [28]. In our study, PPO activity was positively correlated with BI in mini-Chinese cabbage. PAL and 4CL are enzymes that play an essential role in the synthesis of phenolics, and their activity is a component of phenylpropane metabolism. Thus, increased PAL and 4CL activity can be responsible for an increase in the level of phenolics [29,30]. Our results revealed that the PA treatment enhanced both 4CL and PAL activity. This may be attributed to the ability of PA to reduce PPO activity, which would have reduced the production of quinones [9]. Increased PAL and 4CL activity would also increase the synthesis and accumulation of phenolics, resulting in enhanced antioxidant capacity [30]. Thus, the increase in PAL and 4CL activity induced by PA would increase antioxidant capacity. The increased accumulation of phenolic compounds would also enhance defense capacity [31].
Considerable evidence indicates that the oxidative stress resulting from excess production of ROS may induce the browning of plant tissues and that antioxidant enzymes, such as APX, CAT, and POD, play an essential role in reducing ROS levels and inhibiting browning [20,32,33,34]. In the present study, PA increased APX, CAT, and POD activity in mini-Chinese cabbage. These results confirm that PA plays a role in reducing excessive levels of ROS through its ability to enhance antioxidant enzyme capacity. Browning is a significant postharvest physiological disorder and a major stress response in wounded fruit and vegetables. Plant cells subjected to stress respond by activating two aspects of phenolic metabolism [35]. One involves the antioxidant properties of phenolic compounds, which, along with antioxidant enzymes, work together to reduce oxidative stress. The other involves the use of monomeric and polymeric phenolic compounds, whose synthesis is catalyzed by PAL in the phenylpropanoid pathway, to seal off injured tissues. These monomeric and polymeric phenolic compounds have strong antioxidant properties and are also used to form a physical barrier against invading pathogens [31]. In our study, the levels of phenolic compounds (including gallic acid, chlorogenic acid, p-coumaric acid, catechin, p-hydroxybenzoic acid, ferulic acid, and cinnamic acid) and flavonoids (such as luteolin, quercetin, kaempferol, and isorhamnetin) were enhanced during storage by the PA treatment relative to the control group. These compounds represent monomeric and polymeric phenolics that play a role in inhibiting stem browning through their ability to scavenge ROS [36] and inhibit lipid oxidation [37]. Both activities would inhibit stem browning in mini-Chinese cabbages. PA enhanced the levels of both total phenolics and flavonoids, and the enhanced levels of these compounds were associated with reduced stem browning in mini-Chinese cabbages during storage at 25 °C.

## 5.
Conclusions

Treatment of mini-Chinese cabbages with PA greatly inhibited weight loss, reduced the rate of respiration, and inhibited stem browning, which collectively served to maintain the overall postharvest quality of mini-Chinese cabbage stored at 25 °C for 5 d. The mechanism underlying the inhibition of stem browning by PA was associated with decreased membrane damage, as evidenced by lower levels of electrolyte leakage and MDA, and with increased antioxidant metabolism, as evidenced by higher antioxidant enzyme activity and elevated levels of phenolics and flavonoids. The collective results of the present study indicate that the application of PA has the potential to be used as a method to maintain the quality and extend the shelf life of mini-Chinese cabbage after harvest and during storage.
# Artificial Intelligence for Evaluation of Retinal Vasculopathy in Facioscapulohumeral Dystrophy Using OCT Angiography: A Case Series

## Abstract

Facioscapulohumeral muscular dystrophy (FSHD) is a slowly progressive muscular dystrophy with a wide range of manifestations, including retinal vasculopathy. This study aimed to analyse retinal vascular involvement in FSHD patients using fundus photographs and optical coherence tomography-angiography (OCT-A) scans, evaluated through artificial intelligence (AI). Thirty-three patients with a diagnosis of FSHD (mean age 50.4 ± 17.4 years) were retrospectively evaluated and neurological and ophthalmological data were collected. Increased tortuosity of the retinal arteries was qualitatively observed in 77% of the included eyes. The tortuosity index (TI), vessel density (VD), and foveal avascular zone (FAZ) area were calculated by processing OCT-A images through AI. The TI of the superficial capillary plexus (SCP) was increased (p ≤ 0.001), while the TI of the deep capillary plexus (DCP) was decreased in FSHD patients in comparison to controls (p ≤ 0.05). VD scores for both the SCP and the DCP were increased in FSHD patients (p ≤ 0.0001 and p ≤ 0.0004, respectively). With increasing age, VD and the total number of vascular branches decreased (p ≤ 0.008 and p ≤ 0.001, respectively) in the SCP. A moderate correlation between VD and EcoRI fragment length was identified as well (r = 0.35, p ≤ 0.048). For the DCP, a decreased FAZ area was found in FSHD patients in comparison to controls (t(53) = −6.89, p ≤ 0.01). A better understanding of retinal vasculopathy through OCT-A can support some hypotheses on the disease pathogenesis and provide quantitative parameters potentially useful as disease biomarkers. In addition, our study validated the application of a complex AI toolchain using both ImageJ and Matlab on OCT-A angiograms.

## 1.
Introduction

Facioscapulohumeral muscular dystrophy (FSHD) is a slowly progressive muscular dystrophy with a distinctive pattern of skeletal muscle weakness and a wide range of disease severity [1]. Subjects show progressive loss of muscle mass and strength, as well as replacement with fat and connective tissue in selected muscle groups [2], often with an asynchronous and asymmetrical pattern [3,4]. As first revealed by Fitzsimons et al. in 1987 [5] and then confirmed by Padberg et al. in 1995 [6], retinal vasculopathy is considered an established component of the FSHD phenotype [7]. Traditional ophthalmologic findings in FSHD include retinal vascular tortuosity and retinal vessel abnormalities on fluorescein angiography (FA) such as telangiectasia, microaneurysms, areas of capillary closure, and fluorescein leakage due to increased permeability. The leakage of plasma constituents can occasionally lead to exudative retinal detachment [5]. A secondary glaucoma due to neovascularization can develop [8]. However, retinal vascular changes in FSHD patients are often subclinical. Current guidelines advise referral to ophthalmological specialists for FSHD patients with visual complaints or with a severe form of the disease. However, data on the frequency of assessment and the techniques to be used for accurate ophthalmological monitoring in FSHD are lacking. Minor retinal vascular alterations are undetectable with fundus examination; thus, FA is considered the gold standard for evaluating the retinal vasculature [9]. However, FA is an invasive and time-consuming technique that allows visualization of the superficial vascular plexus only [10], and dye leakage as well as haemorrhage or opacities can render the underlying retinal pathology undetectable.
First adapted from optical coherence tomography (OCT), OCT-angiography (OCT-A) is a recently developed imaging technique that can non-invasively image all the layers of the retinal vasculature without dye injection by processing the motion of erythrocytes [11]. More specifically, OCT-A provides depth-resolved images of blood flow in the retina and choroid with a resolution level several times higher than that obtained with older forms of imaging [12], providing quantitative parameters such as the foveal avascular zone (FAZ) area and vessel density (VD). Despite these advantages, except for one study [13], updated information about ophthalmological findings in FSHD detected using OCT-A is missing. In medicine and healthcare, artificial intelligence (AI) has been primarily applied to the field of medical image analysis, where it has shown robust diagnostic performance. Over the past few years, AI has similarly been applied to ocular imaging, mainly fundus photographs, OCT, and OCT-A. A better understanding of retinal vasculopathy through OCT-A and AI-based analysis of angiograms appears particularly promising, since it can provide quantitative parameters potentially serving as disease biomarkers. The aim of this study was to evaluate retinal vascular involvement in FSHD using colour fundus photography and swept-source OCT-A, analysed through AI.

## 2. Materials and Methods

This retrospective study was approved by the Ethics Committee/Institutional Review Board of the Catholic University. This research adhered to the tenets of the Declaration of Helsinki and informed consent was obtained from all patients after full and detailed explanation of the goals and procedures of the study. All the clinical and imaging data reported in this study were retrospectively re-evaluated.
Recruitment was performed according to a collaboration protocol with the Department of Neurology of Università Cattolica del Sacro Cuore, Fondazione Policlinico Universitario Agostino Gemelli IRCCS.

## 2.1. Inclusion and Exclusion Criteria

Inclusion criteria were an established clinical diagnosis of FSHD type 1, confirmed by genetic testing (presence of a 4q35 BlnI-resistant, p13-E11 EcoRI fragment whose length was <40 kb), and the possibility of obtaining good quality ocular imaging. A pre-existing dataset of healthy controls was used for comparison.

## 2.2. Neurological Examination

Patients underwent a complete neurological examination inclusive of the clinical severity score (CSS) [14] to assess disease severity. The CSS is a 10-grade scale, which takes into account the extent of weakness in various muscular districts, considering the descending spread of symptoms from the face and shoulders to pelvic and leg muscles typical of FSHD. Higher scores were assigned to patients with involvement of pelvic and proximal lower limb muscles [14]. The CSS was then corrected for the patient's age at examination (aCSS):

Age-corrected CSS = ((CSS × 2)/age at examination) × 1000

Before dividing by the age at examination, the severity score is multiplied by two to generate whole numbers. The outcome of this calculation is then multiplied by 1000 to improve the interpretation of the results and their visualization in graphs [15].

## 2.3. Ophthalmological Assessment

All patients underwent a full ophthalmologic evaluation including best corrected visual acuity (BCVA), anterior segment slit lamp biomicroscopy examination, tonometry, and fundus ophthalmoscopy after pupil dilation. Colour fundus photography and 3 mm × 3 mm OCT-A scans (320 × 320 pixels, 24-bit RGB) of the superficial (layer 1) and deep (layer 2) capillary plexuses were acquired for each patient using a DRI Triton Swept-Source OCT device (Topcon, Tokyo, Japan).
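Stepping back to the aCSS formula in Section 2.2, the calculation reduces to a one-line function; a minimal sketch in Python (the function name is ours, not from the paper):

```python
def age_corrected_css(css: float, age_years: float) -> float:
    """Age-corrected clinical severity score (aCSS): the CSS is doubled
    (to yield whole numbers), divided by the age at examination, and
    scaled by 1000 for readability."""
    return (css * 2) / age_years * 1000

# e.g. the median CSS of 3.5 at age 50 gives an aCSS of 140.0
print(age_corrected_css(3.5, 50))
```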
The level of segmentation for each capillary plexus was automatically provided by the instrument. To detect the superficial capillary plexus (SCP), the upper segmentation line was situated 2.6 µm under the inner limiting membrane (ILM), whereas the lower segmentation line was located 15.6 µm under the junction between the inner plexiform layer (IPL) and inner nuclear layer (INL). To identify the deep capillary plexus (DCP), segmentation lines were placed 15.6 µm and 70.2 µm under the junction between the IPL and INL. In cases of incorrect automatic segmentation, segmentation boundaries were manually adjusted. Colour fundus photographs were qualitatively assessed by two independent graders, blinded to the patient characteristics, to score vessel tortuosity on a four-point grading scale (none, mild, moderate, or severe).

## 2.4. OCT-A Image Processing

Each enrolled subject had both eyes scanned. However, in order to ensure statistical sample independence [16], data from only one eye, randomly selected for each subject, were included in the analysis. OCT scans were preliminarily examined for the presence of artifacts and then processed to obtain quantitative parameters including vessel tortuosity, vessel density, FAZ area, and FAZ acircularity. Image processing was performed using a combination of Mathwork's Matlab (MathWorks, Inc., Natick, MA, USA), Fiji [17], and other Fiji plugins as described in Figure 1. Before the image processing steps, all OCT scans were converted into grayscale. Matlab and ImageJ/Fiji integration was made possible using two other Fiji plugins [18,19]. The machine learning classification task was performed using Fiji's Trainable Weka Segmentation plugin [20,21], a wrapper around WEKA (Waikato Environment for Knowledge Analysis), a Java-based machine learning workbench developed by the Machine Learning Group at the University of Waikato (Hamilton, New Zealand) [21].
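For illustration only (the study used Fiji's Trainable Weka Segmentation, not Python), a per-pixel vessel classification step of this kind can be sketched with scikit-learn's random forest. The feature matrix here is a synthetic stand-in for the filter responses computed from an annotated training scan; 200 trees and 2 random features per split mirror the classifier settings reported in the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for per-pixel features (e.g. Gaussian blur, Sobel,
# Hessian responses) from a manually annotated training scan.
X_train = rng.random((500, 5))
y_train = (X_train[:, 0] > 0.5).astype(int)  # 1 = vessel, 0 = background

# Random forest classifier (the study used WEKA's FastRandomForest).
clf = RandomForestClassifier(n_estimators=200, max_features=2, random_state=0)
clf.fit(X_train, y_train)

# Per-pixel probability map for unseen pixels, then binarized at 0.5,
# analogous to the Matlab binarization step described above.
proba = clf.predict_proba(rng.random((100, 5)))[:, 1]
binary = (proba >= 0.5).astype(np.uint8)
print(binary.shape, binary.min(), binary.max())
```

The binarized map then feeds the skeletonization and density measurements described next.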
We followed the procedure described by Goselink et al. [13] and Lee et al. [11]. Briefly, the training features selected were Gaussian blur, Sobel filter, Hessian, difference of Gaussians, and membrane projections (membrane thickness 1, membrane patch size 19, minimum sigma 1, maximum sigma 16). Training of the algorithm was performed on a randomly selected OCT-A image. Two distinct models were trained, one for the superficial retinal layer and one for the deep retinal layer. Using the trained models, classification was performed for all the images in the dataset (patients and healthy subjects) using the FastRandomForest classifier, a multi-threaded version of random forest [22], initialized with 200 trees and 2 random features per node. The classifier's output consists of a segmentation probability map highlighting the retinal structures detected as vessels. The probability map was converted into a binarized image in Matlab.

## 2.4.1. Tortuosity Index (TI)

The binarized image was skeletonized (each white object in the binary image was converted to a single-pixel line) in Fiji, and the skeleton features (branch length, vertex positions, branch Euclidean distance) calculated in Fiji were used to compute the tortuosity index, as defined in Lee et al. 2017 [11]. In detail, the actual length of each branch and the imaginary straight length between two branch nodes (points of connection) were marked and calculated. Then, vessel tortuosity was calculated as the sum of branch lengths divided by the sum of imaginary straight lines between branch nodes [11]:

Vessel tortuosity = sum of actual branch lengths/sum of straight lengths between branch nodes.

## 2.4.2. Vessel Density Score (VD Score)

From the binarized image, the VD score was calculated as the ratio of the number of pixels of the corresponding vascular tissue to the total number of pixels in the image, following the approach described in Minnella et al., 2019 [23].

## 2.4.3.
FAZ Area

FAZ area calculations were performed on the binarized image, after morphological closing of the image using a single-disk structuring element of fixed size (20 pixels) [24]. Conversion of measurements expressed in pixels into metric lengths and areas was performed considering a pixel transverse size of 9.37 µm [25].

## 2.5. Statistical Analysis

All statistical calculations were performed using OriginLabs' Origin Pro 2016. A p-value < 0.05 was considered statistically significant. Values were expressed as frequencies (%), mean ± standard deviation (SD), or median (interquartile range, IQR), as appropriate. After checking for normality, a two-sample t-test was used to assess differences between patient and control measurements.

## 3. Results

## 3.1. Population

A total of 33 patients (15 males, 18 females, mean age 50.4 ± 17.4, ranging from 13 to 76 years) with a diagnosis of FSHD and 22 healthy subjects (8 males, 14 females, mean age 44.7 ± 11.3 years) were included in the analysis. A total of 66 eyes from the 33 patients were initially included in this study. All patients were clinically affected, with a median of 3.5 points (range 1.5–5) on the 10-point CSS [14] and 148.1 points (±60.9, range 46.2–333.3) on the aCSS [15]. The mean EcoRI fragment size was 22.3 kb (±6.1 SD), ranging from 10 to 35 kb. Clinical data are summarized in Table 1.

## 3.2. Ophthalmological Examination

The mean BCVA was 0.9 (decimal) ± 0.2 standard deviation (SD) and intraocular pressure (IOP) was within normal values in all the examined eyes.

## 3.2.1. Colour Fundus Photography

Tortuosity of the retinal arteries was observed in 48 (71%) eyes: mild, moderate, and severe vascular tortuosity were found in 17, 25, and 6 eyes, respectively (Table 1, Figure 2).

## 3.2.2. Optical Coherence Tomography Angiography

A random sampling was performed in order to select a single, randomly chosen eye (left or right) OCT-A scan from the 132 available patient scans.
For the deep layer, 33 patients (20 right eyes and 13 left eyes) and 22 healthy subjects (11 right eyes and 11 left eyes) were selected for the following analyses, while for the superficial layer, 33 patients (16 right eyes and 17 left eyes) and 22 healthy subjects (14 right eyes and 8 left eyes) were included, for a total of 110 OCT-A scans.

## 3.3. Machine Learning Results

The machine learning method correctly identified the major vessels. A representative sample of the processing results for a patient and a control is shown in Figure 3.

## 3.3.1. Tortuosity Index

The TI of the superficial layer (SCP) was increased in FSHD patients (mean 1.16 ± 0.01) in comparison to controls (mean 1.15 ± 0.01) (t(53) = 3.62, p ≤ 0.001) (Figure 4A). The deep layer (DCP) showed a decrease (−0.07) in the TI in FSHD patients (mean 1.17 ± 0.01) in comparison to controls (mean 1.24 ± 0.01) (t(53) = −23.5, p ≤ 0.05) (Figure 4B). However, although statistically significant, the interpretation of this last result should take into consideration that the reliability of vessel length calculations could not be visually assessed in the deep layer (Figure 5). No significant correlations were found between the TI and clinical parameters.

## 3.3.2. Vessel Density Score

The VD score in the SCP (Figure 6A) was increased in FSHD patients (mean 38.03 ± 4.32) compared to normal controls (mean 25.40 ± 1.58) (t(53) = 13.10, p ≤ 0.0001). Similarly, the VD score in the DCP (Figure 6B) was increased in FSHD patients (mean 45.52 ± 2.37) compared to normal controls (mean 29.07 ± 1.88) (t(53) = 27.31, p ≤ 0.0004). In addition, a significant correlation was found between age and vascular parameters. With increasing age, VD scores and the total number of vascular branches showed a decrease (r = −0.45, p ≤ 0.008 and r = −0.51, p ≤ 0.001, respectively) in the superficial layer, plausibly reflecting progressive age-related vascular rarefaction.
A moderate correlation between VD and EcoRI fragment length was identified as well (r = 0.35, p ≤ 0.048).

## 3.3.3. Foveal Avascular Zone

The FAZ was automatically delineated, and its area was calculated considering a pixel size of 9.375 µm. Some examples of FAZ calculations are shown in Figure 7. For the SCP, no statistically significant differences in FAZ area were found between FSHD patients (mean 0.29 ± 0.12) and healthy controls (mean 0.34 ± 0.13) (Figure 7A). FSHD patients showed a sex difference, with the FAZ area being larger in females than in males (0.33 mm2 vs. 0.22 mm2; t(31) = −3.2, p ≤ 0.003), in the SCP only. The FAZ area of the SCP showed a positive correlation with CSS (r = 0.55, p ≤ 0.001). In the DCP, a decreased FAZ area (−0.39 mm2) was found in FSHD patients (mean 0.40 ± 0.16) in comparison to controls (mean 0.79 ± 0.26) (t(53) = −6.89, p ≤ 0.01) (Figure 7B). The results are summarized in Table 2.

## 4. Discussion

The present study analysed retinal vascular involvement in FSHD, using fundus photographs and swept-source OCT-A, in order to refine the ophthalmological phenotype of FSHD subjects in both qualitative and quantitative terms. In our study, fundus photographs showed a high prevalence of retinal arterial tortuosity (77%), confirming evidence in the literature [5,6,13]. However, these retinal vascular changes did not cause complaints or vision loss in any patient. The use of OCT-A provided a more detailed insight into the ophthalmological manifestations of FSHD. The quantitative analysis of the TI, VD score, and FAZ area showed statistically significant differences between FSHD patients and controls. FSHD patients showed an increase in the TI of the SCP, a decrease in the TI of the DCP, an increase in the VD score of both the SCP and DCP, and a decrease in the FAZ area of the DCP in comparison to controls. Interestingly, a gender difference was found in the FAZ area (SCP), with higher values in females.
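For concreteness, the three quantitative read-outs discussed above (TI, VD score, and FAZ area in mm²) reduce to simple ratios and a unit conversion. A minimal Python sketch with made-up inputs (all names and example values are ours, not from the paper's Matlab/Fiji code):

```python
import numpy as np

def tortuosity_index(branches):
    """TI (Lee et al. 2017): sum of actual branch lengths divided by the
    sum of straight (Euclidean) distances between branch nodes."""
    return (sum(b["length"] for b in branches)
            / sum(b["euclidean"] for b in branches))

def vessel_density(binary_img):
    """VD score: vessel (nonzero) pixels over total pixels."""
    return np.count_nonzero(binary_img) / binary_img.size

def faz_area_mm2(n_pixels, pixel_size_um=9.375):
    """FAZ area: pixel count times the area of one pixel, in mm^2
    (9.375 um corresponds to a 3 mm scan sampled over 320 pixels)."""
    return n_pixels * (pixel_size_um / 1000.0) ** 2

# Made-up skeleton branches: curved length vs. node-to-node distance.
branches = [{"length": 12.0, "euclidean": 10.0},
            {"length": 23.0, "euclidean": 20.0}]
print(round(tortuosity_index(branches), 2))   # 35/30 -> 1.17

vessels = np.zeros((4, 4), dtype=np.uint8)
vessels[:2, :] = 1                            # 8 vessel pixels out of 16
print(vessel_density(vessels))                # 0.5

print(round(faz_area_mm2(3300), 2))           # ~0.29 mm^2
```

Note how a FAZ of roughly 3300 pixels lands near the SCP means reported above, which is simply a consequence of the fixed pixel size.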
Currently, there is no consensus in the literature about the effects of gender on retinal vascularity. VD and perfusion density (PD) seem not to be affected by sex [26]. The sole parameter potentially affected by gender is the FAZ area, which is larger in females than in males [27]. Our findings on the FAZ area in FSHD patients reflect those of the general population. In the present study, we found that, with increasing age, VD and the total number of vascular branches decreased in FSHD patients. Age can have an impact on vascular results, considering that elderly subjects usually present a vascular rarefaction of the capillary plexuses. However, the FSHD population (on average older than the controls) presented an increased VD in both the SCP and DCP. Our data on the TI support what was found by Goselink et al. [13]: the absence of smooth muscle in the capillary vessel wall, as opposed to retinal arterioles, could provide a possible explanation for the TI increase in the superficial layers. Our results are instead novel regarding VD scores. A possible molecular link between FSHD pathophysiology and neoangiogenesis, plausibly responsible for the increase in VD, is provided by the upregulation of several genes involved in neovessel formation [28], and in particular of the Wnt/Frizzled signalling pathway, in the skeletal muscle of patients with active disease [29]. Further studies will be needed to confirm the relevance of this or other molecular pathways to the reported retinal vascular proliferation [30]. The decrease in the FAZ area in the DCP was likely related to the VD increase. The FAZ area is inversely related to the microcirculatory condition, as demonstrated by FAZ area enlargement in diabetic retinopathy and retinal vascular occlusive diseases due to the destruction of the vascular arcades [31]. In addition, our study is noteworthy for validating the application of a complex AI toolchain using both ImageJ and Matlab on OCT-A angiograms.
This pipeline could be replicated and applied in other studies.

## Considerations of AI and Study Limitations

While the FAZ area and VD calculations have shown very good tolerance to image quality and artefacts, TI figures should be interpreted with the greatest care. We strictly followed the approach of Goselink et al. [13] for the TI calculations, in order to be able to compare our results to previous works. However, we have already raised some concerns [32] about the reliability of this method for the SCP. Specifically, we noticed that the reliability of the TI calculation may be affected by the quality of segmentation in a somewhat unpredictable manner. Moreover, regarding the DCP, we must highlight that the presence of a complex lattice of smaller capillaries results in the detection of an extremely fragmented network of short vessels, a situation in which tortuosity computation by itself may be rendered meaningless. Furthermore, the lengths of the vessels are calculated during the skeletonization process. However, this process calculates the lengths of the "branches", which are the lines between the nodes along the vessels, and therefore does not calculate the total length of the vessels, as a human evaluator would do. The relatively small sample size is also a partial limitation of this study. In addition, we included only FSHD type 1 patients, but exploring possible similarities or differences in the ophthalmological characteristics of FSHD type 2 patients could be worthwhile. Further studies would be needed to clarify the potential correlations between the quantitative OCT-A parameters and the severity of the muscular disease, and to determine which patients may have or develop more severe disease [7]. Thus, OCT-A could become an important tool for the routine ophthalmological evaluation of FSHD patients.

## 5. Conclusions

The importance of assessing ophthalmological abnormalities in FSHD is not restricted to the possibility of treatable vision loss.
Looking at retinal vasculopathy as an integral part of FSHD opens new horizons, helping to understand the pathogenesis of the disease. OCT-A is a non-invasive tool potentially useful to assess retinal vasculopathy and to provide promising disease biomarkers. The identification of new parameters potentially associated with prognosis appears pivotal in neuromuscular disorders such as FSHD, where the extent and severity of involvement can vary enormously.
# The Use of Complementary and Alternative Medicine among Peritoneal Dialysis Patients at a Second-Level Hospital in Yucatán, Mexico

## Abstract

Background: Complementary and alternative medicine (CAM) is widely used for multiple reasons, such as treatment of diseases and their symptoms, empowerment, self-care, disease prevention, dissatisfaction with conventional medicine or its adverse effects or cost, perceived compatibility with beliefs, and idiosyncrasy. This study investigated CAM use in patients with chronic kidney disease (CKD) undergoing peritoneal dialysis (PD). Methods: A cross-sectional survey study was conducted with 240 eligible patients with CKD in the PD program. By applying the I-CAM-Q questionnaire, the frequency, level of satisfaction, and reasons for CAM use were explored, and the demographic and clinical data of users and non-users were analyzed. Data analysis included descriptive analysis, Student's t-test, Mann-Whitney U, chi-square, and Fisher tests. Results: The main type of CAM used was herbal medicine, and chamomile was the most commonly used herb. Improving the state of well-being was the main reason for use, the benefit attributed to CAM was high, and only a low percentage of users reported side effects. Only 31.8% of the users informed their physicians. Conclusion: The use of CAM is popular among renal patients, and physicians are not adequately informed; in particular, ingested types of CAM represent a risk for drug interactions and toxicity.

## 1. Introduction

Chronic kidney disease (CKD) is a condition that leads to disability, decreased quality of life, and substantial social and financial costs. It is now recognized as a global public health priority that has reached epidemic proportions worldwide, with a consequent impact on morbidity and mortality and cost to the health system.
In 2017, the global prevalence of CKD was 9.1%, and in 2019, the Pan American Health Organization estimated that it is a leading cause of disease burden, ranked as the 8th cause of mortality and the 10th cause of years of life lost in both sexes [1,2]. CKD is more prevalent in people with obesity, hypertension, and/or diabetes mellitus, as well as elderly people, women, and racial minorities, and its prevalence is expected to increase, including the stage requiring renal replacement therapy such as peritoneal dialysis (PD) [3,4]. Approximately 80% of the world's population uses complementary and alternative medicine (CAM) to maintain their health [5,6]. The use of CAM by the population has experienced significant growth in the last 15 years, with consequent medical, economic, and sociological impacts; this increase is especially evident in individuals with chronic diseases. Patients mention using it for multiple reasons, such as the treatment of diseases and their symptoms, but also the maintenance of health, empowerment, self-care, disease prevention, improvement of quality of life, dissatisfaction with allopathic medicine, adverse effects of the medications, and their cost. Additional reasons include the patient's need to control the evolution of their disease and the perception of compatibility of CAM use with their values, beliefs, and idiosyncrasy [7,8,9,10]. Studies conducted to date suggest that adult patients with chronic diseases such as diabetes mellitus, systemic arterial hypertension, chronic kidney disease, cancer, chronic obstructive pulmonary disease, and rheumatoid arthritis, among others, are more prone to use CAM as part of the self-care and management of their condition and are more likely to use CAM with greater comorbidity [11,12,13,14,15].
Doctors are often inadequately informed by their patients about their CAM use; for example, only 18% of Polish cancer patients discussed using CAM with a doctor [16], in contrast with the 60% of pediatric oncology patients from Switzerland who discussed use of CAM with their oncologist [17] and the 31.3% of Colombian rheumatic patients who used CAM and informed their rheumatologists, despite fear of retaliation [18]. In Mexico, only a few studies have described the use of CAM in this population. Carmona-Sanchez reported the use of CAM to treat various digestive disorders (prevalence 28–51%) [19]; Herrera-Arellano et al. reported that 73.4% of HIV-positive patients were users of some type of CAM [20], and the cactus known as nopal was the most common indigenous remedy used to treat diabetes mellitus (73.1%). The geographic region of the Yucatán Peninsula was the cradle of the Mayan culture, which reached an important degree of development in the field of traditional medicine, characterized by the use of various preparations of plants, animals, and minerals with curative action, administered by traditional doctors or healers; in addition, the exercise of Mayan medicine was entrusted to three specialists of different ranks: the h-men (priest), the dza-dzac (the one who heals with herbs), and the pulyah (sorcerer) [21]. Mayan medicine continues to be used in both rural and urban populations of the Yucatán Peninsula. The use of CAM in patients with chronic diseases has become an interesting topic for academics and medical doctors. Patients with chronic diseases generally have more than one type of ailment, so it is very important that physicians know and understand the reasons that influence patients to use any type of CAM, with the aim of guiding them toward better treatment decisions and consequently aiding health care systems. The absence of adequate information from patients to physicians may also be related to social perceptions [22,23].
CAM is widely used in the general population, but its use in patients with CKD has been scarcely investigated worldwide. Patients on PD may be more likely to use CAM in view of the chronicity of their disease and their various comorbidities. Moreover, this group of patients could be at an increased risk of drug interactions, toxicity, or electrolyte disturbances owing to the absence of renal excretory function. The prevalence and types of CAM used among Mexican CKD patients are unknown. Therefore, the present survey was designed to document the frequency and types of CAM usage in patients with CKD treated with PD attending a regional hospital in Yucatán, Mexico, using a validated questionnaire. This study also investigated whether patients reported their CAM use to their doctors, their reasons for use, and their perception of its benefit. ## 2.1. Study Design and Setting This is a cross-sectional study. Patients with CKD receiving PD were invited to participate by phone call and, at their follow-up medical appointment, were interviewed face-to-face by two experienced researchers. The study was conducted at the Dialysis Clinic of the Regional General Hospital “Ignacio García Téllez” of the Instituto Mexicano del Seguro Social (IMSS), a second-level hospital located in Mérida, Yucatán, Mexico, from November to December 2018. ## 2.2. Study Population A total of 319 patients from the PD program of the Ignacio García Téllez Hospital were invited to participate in this study ($n = 319$). The inclusion criteria were age older than 18 years, at least one clinical evaluation by a nephrologist of the PD program within the last two months, and agreement to participate by answering the questionnaire verbally in an interview. 
The exclusion criteria were as follows: renal transplant, age < 18 years, refusal to participate, undergoing hemodialysis, death during the study period, and failure to attend the appointment. ## 2.3. Sample Size The sample size for the survey was estimated using an $18\%$ anticipated prevalence of CAM use among dialysis patients [24]; with a $95\%$ level of confidence and a $5\%$ margin of error, the estimated sample size was 227 (Epi Info v. 7.2, CDC, Atlanta, GA, USA). The final sample comprised 240 PD patients from our hospital (Figure 1). ## 2.4. Research Instrument The data were collected with a previously reported and validated questionnaire, the I-CAM-Q (International Complementary and Alternative Medicine Questionnaire), which includes four parts: (a) examination by a health provider, (b) complementary treatments, (c) use of herbal medicine and dietary supplements, and (d) self-help practices [25,26] (Supplementary Materials). ## 2.5. Clinical Characteristics and Biochemical Parameters of Patients The clinical characteristics of patients were obtained during the medical appointment by clinical examination. The biochemical parameters were obtained from the clinical appointment and corresponded to the previous bimonthly medical appointments. After at least 12 h of fasting, venous blood samples were collected to measure the complete blood count and various biochemical components (glucose, creatinine, urea, uric acid, albumin, calcium, phosphorus, and potassium). ## 2.6. Ethics The project was approved by the local Research and Ethics Committee 3201 of the Regional General Hospital Ignacio García Téllez, IMSS (registration number R-2018-3201-26). The participants were given information about the aim of the study and the content of the questionnaire. Informed consent was obtained from all patients before confirming their participation in this investigation. ## 2.7. 
Data Analysis Continuous variables are expressed as arithmetic mean ± 1 standard deviation (±1 SD), and categorical variables are presented as frequencies and percentages. For comparison and analysis, the study population was divided into two groups: CAM users and non-CAM users. Continuous variables were analyzed using Student’s t-test or the Mann-Whitney U test, depending on the normality of the data distribution, while categorical variables were compared between the two groups using the chi-square test or Fisher’s exact test. Differences were considered statistically significant at p-value < 0.05. ## 3.1. Use of CAM The sociodemographic characteristics of patients are shown in Table 1. The frequency of CAM use in the study population was $55.0\%$ ($\frac{132}{240}$), of whom $50.8\%$ ($\frac{67}{132}$) were men and $49.2\%$ ($\frac{65}{132}$) were women. No statistically significant differences were observed between the sociodemographic characteristics of CAM users and non-users (Table 1). ## 3.2. Type of CAM The most common type of CAM used by patients was herbal medicine ($50.0\%$, $\frac{66}{132}$), followed by mind-body practices such as music therapy ($24.2\%$, $\frac{32}{132}$) and relaxation techniques ($18.9\%$, $\frac{25}{132}$) (Table 2). Most CAM users ($65.1\%$; $\frac{86}{132}$) reported using only one type of CAM, while $34.8\%$ ($\frac{46}{132}$) used more than one CAM in combination; the majority of them used two types of CAM ($67.4\%$; $\frac{31}{46}$), followed by three ($28.3\%$; $\frac{13}{46}$). The most frequent combination was herbal medicine and music therapy, followed by herbal medicine and spiritual healing. The distribution of the combinations of CAM types used by the study population is shown in Figure 2. Regarding herbal medicine, many patients used more than two types of herbal products ($40.9\%$; $\frac{27}{66}$), such as teas, referencing a total of 36 different types of herbs and natural products. 
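The Epi Info estimate reported in Section 2.3 follows the standard single-proportion sample-size formula, $n = z^2 p(1-p)/e^2$. The sketch below is not part of the study's analysis; it simply reproduces the reported value of 227 from the stated inputs:

```python
import math

def sample_size_proportion(p, margin, z=1.96):
    """Sample size for estimating a proportion: n = z^2 * p * (1 - p) / e^2."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# Inputs reported in Section 2.3: 18% anticipated prevalence,
# 95% confidence (z = 1.96), 5% margin of error.
n = sample_size_proportion(p=0.18, margin=0.05)
print(n)  # 227, matching the estimate obtained with Epi Info
```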
The most frequently used was chamomile with $25.8\%$ ($\frac{25}{97}$), followed by moringa leaves with $16.0\%$ ($\frac{15}{97}$), chaya leaves with $8.5\%$ ($\frac{8}{97}$), and orange leaves with $6.2\%$ ($\frac{6}{97}$) (Table 3). ## 3.3. Reason to Use CAM The most frequent reason for using CAM was to improve well-being ($71.7\%$; $\frac{132}{184}$). This reason was given by $100\%$ of practitioners of relaxation techniques ($\frac{25}{25}$), meditation ($\frac{9}{9}$), and Tai Ji Quan ($\frac{2}{2}$); $85.7\%$ of practitioners of music therapy ($\frac{30}{35}$); $83.3\%$ of those who attended healing ceremonies ($\frac{5}{6}$); and $83.3\%$ of spiritual healing practitioners ($\frac{10}{12}$). The second most common reason for using CAM was chronic health problems, which was the most frequent answer among herbal medicine users ($40.9\%$, $\frac{27}{66}$) (Figure 3). ## 3.4. Perception of Benefit of Using CAM Regarding the benefits attributed to the use of CAM, of the 184 responses, the majority indicated its use was very beneficial ($73.3\%$; $\frac{135}{184}$). This was specifically reported by users of relaxation techniques ($92.0\%$; $\frac{23}{25}$) and music therapy ($90.6\%$; $\frac{29}{32}$), those who take vitamins ($90.0\%$; $\frac{9}{10}$), and those who engage in spiritual healing ($75.0\%$; $\frac{9}{12}$). In contrast, fewer patients consuming herbal plants ($56.1\%$; $\frac{37}{66}$) indicated their use was very beneficial (Figure 4). ## 3.5. Adverse Effects and Starting Time of CAM Use Overall, $97.0\%$ ($\frac{128}{132}$) of CAM users stated that their use did not cause side effects, while the remaining $3\%$ ($\frac{4}{132}$) reported secondary effects on the gastrointestinal tract ($\frac{2}{4}$) and the nervous system ($\frac{2}{4}$). Regarding the starting time of CAM use, $54.5\%$ ($\frac{72}{132}$) of patients reported use prior to peritoneal dialysis, and $45.5\%$ ($\frac{60}{132}$) began use after starting treatment with peritoneal dialysis. 
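The test-selection rule described in the Data Analysis section (normality check, then t-test vs. Mann-Whitney U for continuous variables, and chi-square vs. Fisher's exact for categorical ones) can be sketched as follows. The data are synthetic and the variable names illustrative; they are not study data, and the 5-expected-count rule for switching to Fisher's exact test is a common convention assumed here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic continuous variable (e.g., a blood pressure measure) for the
# two groups compared in the paper: CAM users (n = 132) and non-users (n = 108).
cam_users = rng.normal(140, 15, 132)
non_users = rng.normal(140, 15, 108)

def compare_continuous(a, b, alpha=0.05):
    """Shapiro-Wilk normality check, then Student's t or Mann-Whitney U."""
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if normal:
        return "t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

def compare_categorical(table, min_expected=5):
    """Chi-square test, or Fisher's exact test when expected counts are small."""
    expected = stats.contingency.expected_freq(table)
    if (expected < min_expected).any():
        return "Fisher exact", stats.fisher_exact(table)[1]
    return "chi-square", stats.chi2_contingency(table)[1]

name, p = compare_continuous(cam_users, non_users)
print(name, round(p, 3))
```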
## 3.6. Recommending the Use of CAM The main sources of recommendation for the use of CAM were family members ($40.1\%$; $\frac{53}{132}$), followed by friends ($22.7\%$; $\frac{30}{132}$), allopathic doctors ($8.3\%$; $\frac{11}{132}$), other health professionals ($6\%$; $\frac{8}{132}$), and marketing ($2.2\%$; $\frac{3}{132}$). ## 3.7. Informing the Medical Doctor of CAM Use Only $31.8\%$ ($\frac{42}{132}$) of CAM users informed their doctors about their use of CAM. The remaining $68.2\%$ ($\frac{90}{132}$) did not report it for the following reasons: (a) the doctor did not ask ($72.2\%$, $\frac{65}{90}$), (b) the patients did not consider it necessary ($20.0\%$; $\frac{18}{90}$), (c) the patients did not provide the information for fear of disapproval ($6.7\%$; $\frac{6}{90}$), and (d) the patients did not have medical assistance at the time they used CAM ($1.1\%$; $\frac{1}{90}$). ## 3.8. Clinical Characteristics and Biochemical Parameters of Patients The duration of PD therapy in the participating patients ranged from 1 to 168 months, with a mean of 27.4 months (±27.6), and no statistically significant difference in months on PD was observed between CAM users and non-users ($p = 0.412$). The volume of residual urine output in our study population ranged from 0 to 3000 mL, with a mean of 649 mL (±564), and no statistically significant difference in residual urine volume was observed between CAM users and non-users ($p = 0.447$). Table 4 and Table 5 display the clinical characteristics and biochemical parameters of CAM users and non-users, respectively. We did not find significant differences in clinical and biochemical characteristics between the two groups; only the diastolic pressure was significantly higher in CAM users (Table 5). On the other hand, most of the patients were overweight ($44.2\%$; $\frac{106}{240}$) or obese ($26.3\%$; $\frac{63}{240}$). 
According to levels of albumin and BMI, $72.9\%$ ($\frac{175}{240}$) had adequate nutritional status. Further, $60\%$ of patients ($\frac{144}{240}$) had a Karnofsky index higher than 80 points, which suggests that they were able to carry out daily activities independently. The main etiology of CKD reported was diabetic nephropathy ($62.5\%$; $\frac{150}{240}$), and the patients had between two and seven comorbidities. ## 4. Discussion The use of CAM has increased in recent decades, mainly for the prevention and management of chronic diseases and the well-being needs of the older population. In recent years, the WHO has implemented a strategy for integration, validation, and safety to harness the potential contribution of CAM to health, wellness, and people-centered health care [27]. In Mexico, native peoples have a long tradition of CAM use. However, studies on CAM use in chronic diseases are scarce. Our study explored the prevalence of CAM use in patients with CKD treated with PD, the types and reasons for its usage, the perception of its benefit, and its adverse effects. Fifty-five percent of our study population was identified as using CAM therapy; this finding is similar to the report of a study in a German population, where $57\%$ of dialysis patients reported being regular CAM consumers [28]. A study in a Turkish population ($72.5\%$) [29] showed a high frequency of CAM use; on the other hand, a study in American patients ($18.0\%$) [24] reported a lower frequency. The use of CAM can vary by diverse demographic factors such as age, sex, educational status, socioeconomic status, and occupational status [30]; however, in our study, none of the demographic factors analyzed had a significant influence on CAM use. Women are generally more likely to use CAM than men [31], but we did not find a sex effect on CAM use for CKD in our study. 
However, studies in patients with kidney disease have shown that both men and women are likely to use CAM; in studies in Egyptian [32] and Indian [33] patients, men were more likely to use CAM, whereas a study in Saudi patients showed that women were more likely [34]. Regarding occupation, $24\%$ of the patients identified as CAM users were housewives, followed by retirees ($17.5\%$); use was also common among patients with only primary education and among those of medium-low ($21.3\%$) and medium ($22.1\%$) socioeconomic levels. Medicinal plants are part of the therapeutic resources of traditional pre-Hispanic medicine in Mexico, and they are culturally and historically popular [35], so it is not unusual for herbal medicine ($50.0\%$) to be the most common type of CAM used by patients. Similar findings were reported in American ($67.8\%$) [24] and Turkish ($76.9\%$) [29] patients; however, this differs from the report in German patients, whose most common type of CAM was mineral supplements [28]. Occasionally, factors such as culture, history, idiosyncrasies, and beliefs influence the use of different CAM types [36]. On the other hand, $34.8\%$ ($\frac{46}{132}$) of our patients employed more than one type of CAM, even up to six; in German patients, $27.0\%$ employed more than one CAM, with patients reporting the use of up to five types. Herbal medicines can include a variety of potentially hepatotoxic compounds: (a) natural products such as volatile compounds, glycosides, terpenoids, alkaloids, anthraquinones, phenolic compounds, and other toxins; and (b) contaminants or adulterants such as metals, mycotoxins, and pesticide and herbicide residues; their mechanisms of inducing hepatotoxicity remain mostly imprecise in many cases [37]. In addition, herbal medicines can carry a variety of nephrotoxic compounds such as organic acids, alkaloids, terpenes, lactones, saponins, minerals, and toxic proteins [38]. 
The use of herbal medicines by CKD patients is especially detrimental because of hepatotoxicity and nephrotoxicity, hemodynamic changes, electrolyte abnormalities, and effects on blood pressure, blood glucose, and coagulation parameters [29,30]. With the increasing use of herbal medicines, there is a need to monitor and study their safety, especially in patients with CKD. In fact, the WHO recommends including herbal medicines in pharmacovigilance systems [39]. Chamomile ($25.8\%$, $\frac{25}{97}$) and moringa leaves ($16.0\%$, $\frac{15}{97}$) were the most common herbs used by patients, and various studies have shown the beneficial effects and low rates of side effects of both plants [40,41]. Improving well-being ($71.7\%$; $\frac{132}{184}$) was the most frequent reason for using CAM in this study, unlike American patients, who use CAM to improve their energy and concentration [24], and German and Turkish patients, who use it to strengthen their immune system [28,29]. The majority of CAM use reported by patients ($73.3\%$; $\frac{135}{184}$) was considered beneficial, which is similar to the report by Duncan et al. in American patients ($77.8\%$) [24] but lower than in Turkish patients ($95.5\%$) [29]. With respect to side effects, $95.4\%$ ($\frac{126}{132}$) of CAM users did not present adverse effects; in a Turkish study, however, a smaller proportion of patients ($77.3\%$) were free of side effects, probably because Turkish patients used more herbal plants or because the plants they employed had undesirable effects [29]. In our study, similar to the findings of Uzdil and Kılıç, the majority of people who recommended CAM were family members and friends; in addition, that investigation reported that $81.6\%$ of patients recommended CAM to another person [29]. A lower proportion of our patients ($31.8\%$; $\frac{42}{132}$) informed their physicians about CAM consumption compared to German ($59\%$) [28] and American ($36.8\%$) [24] patients. 
Physicians not asking patients about the use of CAM was the main reason patients gave for not informing them, which reflects the limited interest of medical doctors in CAM use. This interest needs to be improved because, as shown above, herbal plants are the most common type of CAM reported by patients, CKD patients take many drugs for different complications at the same time, and interactions between drugs and herbs may mimic, decrease, or increase the action of prescribed drugs [30,42]. Improving patient–physician communication is essential for positive health outcomes. The lack of adequate discussion about CAM use raises the risk of adverse effects, including interactions with conventional treatments, which could be related to social perceptions [22,43]. Clinical and biochemical characteristics were available for all patients included in the study; previous studies in the literature did not consider these parameters, so we regard their inclusion as a contribution of this work. Most participants were overweight or obese; increasing evidence suggests that obesity is a risk factor for diabetes and CKD, and high BMI has been reported to be related to diabetic nephropathy [44]. These data are consistent with the findings of our study, in which the main etiology of CKD was diabetic nephropathy ($62.5\%$; $\frac{150}{240}$). According to levels of serum albumin and BMI, $72.9\%$ ($\frac{175}{240}$) of patients showed good nutritional status; in addition, other biochemical parameters were analyzed, such as hemoglobin, urea, creatinine, glucose, uric acid, calcium, phosphorus, and potassium, and we did not observe significant differences between users and non-users of CAM. These results suggest that CAM use does not have a negative effect on the health of patients with CKD. 
In addition, no significant differences were observed in either group with respect to edema grade or systolic pressure, suggesting that the use of CAM is not associated with changes in fluid status in patients with CKD on PD. In contrast, the diastolic pressure was significantly higher in CAM users; however, we believe that this is not clinically relevant. The patients in our study had between two and seven comorbidities, such as acute myocardial infarction, heart failure, peripheral vascular disease, dementia, chronic lung disease, connective tissue diseases, peptic ulcer disease, liver diseases, HIV, and diabetes mellitus, and according to the comorbidity scale, no significant differences were observed between users (score = 3) and non-users (score = 3.1) of CAM. This contrasts with other studies, in which CAM users had a greater number of diseases [28,45]. This investigation has limitations: given its cross-sectional design, the conclusions drawn cannot establish causation, and the study only included patients from the single dialysis clinic of one hospital; therefore, our results may not reflect CAM use in other provinces, considering the wide differences in culture, beliefs, and idiosyncrasies across Mexico. Despite these limitations, our results provide an important new understanding, and to the best of our knowledge, this is the first study on the use of CAM in CKD patients in Mexico. ## 5. Conclusions The use of CAM is common among renal patients on PD ($55\%$); the main type was herbal medicine, chiefly chamomile, followed by mind-body practices such as relaxation techniques. The main reason for the use of CAM in our patients was to improve their state of well-being, and only $3\%$ of users reported side effects. Since only $31.8\%$ of CAM users informed their doctor, continued research and education are needed to identify and break down barriers to patient–doctor communication about CAM use.
# Association of Body Mass Index (BMI) with Lip Morphology Characteristics: A Cross-Sectional Study Based on Chinese Population ## Abstract Background: Lip morphology is essential in the diagnosis and treatment planning of orthodontics and orthognathic surgery to ensure facial aesthetics. Body mass index (BMI) has been shown to influence facial soft tissue thickness, but its relationship with lip morphology is unclear. This study aimed to evaluate the association between BMI and lip morphology characteristics (LMCs) and thus provide information for personalized treatment. Methods: A cross-sectional study consisting of 1185 patients treated from 1 January 2010 to 31 December 2020 was conducted. Confounders of demography, dental features, skeletal parameters and LMCs were adjusted for by multivariable linear regression to identify the association between BMI and LMCs. Group differences were evaluated with two-sample t-tests and one-way ANOVA tests. Mediation analysis was used for indirect-effect assessment. Results: After adjusting for confounders, BMI is independently associated with upper lip length (0.039, [0.002–0.075]), soft pogonion thickness (0.120, [0.073–0.168]), inferior sulcus depth (0.040, [0.018–0.063]), and lower lip length (0.208, [0.139–0.276]), and curve fitting revealed non-linearity with BMI in obese patients. Mediation analysis found that BMI was associated with superior sulcus depth and basic upper lip thickness through upper lip length. Conclusions: BMI is positively associated with LMCs, except for the nasolabial angle, which is negatively associated, while in obese patients these associations are weakened or reversed. ## 1. Introduction Soft tissue aesthetics, as the major motivation of patients seeking orthodontic and orthognathic treatment, has become an important concern in orthodontic treatment planning [1,2,3]. Therefore, it is of great value to explore the potential influencing factors related to soft tissue aesthetics and to make personalized treatment plans for patients. 
Recent studies have paid increasing attention to the lip profile, as it has been proved to be a key feature affecting facial esthetic perception [4]. However, increasing evidence has shown that, in addition to hard tissue morphology, soft tissue morphology is affected by many factors, including heredity and environment (race, age, gender, etc.). Specifically, compared with females, males were found to have more prominent and thicker lips [5]. Besides, dental features such as dental crowding, occlusal relationship, and especially incisor position also have an impact, with the anteroposterior position of the upper incisor proving to be closely related to upper lip thickness [6]. Moreover, a recent study has demonstrated that upper lip morphology varies significantly between different skeletal patterns [7]. Body Mass Index (BMI), a commonly used value to measure body shape and health status, is calculated as weight in kilograms divided by the square of height in meters [8]. Previous studies have found that BMI is associated with various systemic diseases [9]. Concerning oral health, some evidence exists that there might be an association between increased BMI and an increased risk of caries [10,11], periodontal diseases [12], root dilaceration [13] and poorer cooperation [14]. In addition, it has been proved that obesity can affect facial bone and soft tissue structures by affecting growth and development, bone metabolism and fat distribution [15]. For example, mandibular growth and lower facial height were found to be significantly associated with BMI [16]. Overweight subjects were found to have larger maxillary width, and obese people were found to have larger maxillary length and mandibular width [17]. Besides, many studies have determined that BMI is one of the key factors affecting facial soft tissue thickness (FSTT) [18,19], which increases with BMI. 
Specifically, overweight subjects were found to have thicker nasion soft tissue, whereas obese subjects were found to have thicker pogonion, glabella and gnathion soft tissue. In recent years, research on lip aesthetics has been carried out worldwide and standard values have been established in some populations [20,21]. Previous studies have mainly explored facial soft tissue characteristics based on age, gender, race, and skeletal patterns. A recent stereophotogrammetric analysis first reported the association between larger BMI and increased linear lip measurements, which may suggest that increases in BMI are associated with directional lip stretch [22]. Current studies on the relationship between BMI and facial soft tissue mainly focus on its thickness (FSTT). There are few studies concerning lip morphology, and the relationship between BMI and lip morphology remains unclear. In addition, many existing studies have limitations such as small sample sizes or imperfect statistical methods. Given the nonnegligible influence of BMI on facial soft tissue and bone structures, and the importance of lip morphology in aesthetic considerations for orthodontic treatment, it is necessary to clarify the association between BMI and lip morphology, to provide a basis for more accurate diagnosis and to help orthodontists develop more personalized aesthetic treatment plans for patients with different BMI. Therefore, a statistically well-designed study with a larger sample size was conducted in a Chinese population, aiming to (a) explore the average values of lip morphology characteristics and reference values for BMI in the Chinese population; (b) compare lip morphology characteristics among four BMI categories; (c) identify the lip characteristics that are independently affected by BMI by adjusting for various confounding variables; and (d) explore the mediators between BMI and lip characteristics. 
The null hypothesis of this study was that lip morphology characteristics did not differ significantly between the four BMI categories. ## 2.1. Study Population and Data Collection This cross-sectional study was reported following the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guideline [23]. Figure 1 shows the flow chart of the analysis process and research contents. Patients who received consecutive orthodontic treatment in the Department of Orthodontics, West China Hospital of Stomatology, Chengdu, Sichuan, from 1 January 2010 to 31 December 2020 were identified retrospectively. The exclusion criteria were: (a) participants aged < 12 y; (b) participants with a history of orthodontic treatment; (c) participants without permanent dentition; (d) participants without the required complete baseline information. Prior to orthodontic treatment, all participants received a series of examinations, including demographic questionnaires, intraoral and facial photographs, plaster and digital dental models, and radiographic examinations. The above data were collected for subsequent analysis, and informed consent was obtained from all adult participants and the guardian of each minor. Lateral cephalometric radiographs were taken using the same device (Veraviewepocs, Morita, Kyoto, Japan). Patients were in their natural head position and centric occlusion, and were instructed to stay relaxed and not compress their lips during exposure. Pre-treatment cephalometric radiographs were imported into Dolphin imaging software version 11.9.07.23 (Patterson Dental, Los Angeles, CA, USA) and independently traced and measured by two experienced orthodontists, whose mean values were used for subsequent analyses. A total of seven linear parameters and one angular parameter were measured using cephalometric landmarks. Landmarks, reference lines and measurements used in this study are shown in Figure 2A. 
The cephalometric landmarks are defined in Table 1. ## 2.2. Lip Characteristics and BMI Lip morphology characteristics (LMCs) were described using eight indices: nasolabial angle (NLA), superior sulcus depth (SSD), basic upper lip thickness (BULT), upper lip thickness (ULT), upper lip length (ULL), soft pogonion thickness (SPT), inferior sulcus depth (ISD) and lower lip length (LLL). Specifically, (a) NLA is the intersection angle between the line Cm-Sn and the line Sn-UL; (b) SSD is the distance from the most concave point of upper lip to the line perpendicular to the Frankfort (FH) plane (the line Or-Po); (c) BULT is the distance between point Sn and the point 3 mm below point A; (d) ULT is the distance from point UL to the labial surface of the upper central incisor; (e) ULL is the distance between two parallel lines in the FH plane through point Sn and point Stms; (f) SPT is the distance between point Pog and Pog’; (g) ISD is the vertical distance from point Si to the line LL-Pog’; (h) LLL is the distance between two parallel lines in the FH plane through point Stmi and point Me’. BMI was calculated using objectively measured height and weight records from demographic questionnaires prior to orthodontic treatment. According to the World Health Organization standards, BMI can be divided into the following four categories: underweight (BMI < 18.5 kg/m2); normal weight (18.5 ≤ BMI < 25.0 kg/m2); overweight (25.0 ≤ BMI < 30 kg/m2) and general obesity (BMI ≥ 30 kg/m2). Figure 2B shows a representative underweight patient profile and Figure 2C shows a representative overweight patient profile. ## 2.3. Covariates Demographic information on age and gender was obtained from the medical record system of the hospital. Dental features including crowding, molar relationship, overbite, and overjet were assessed based on intraoral photographs, dental models, and medical examination records at the first visit. 
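The BMI computation and the WHO cutoffs given in Section 2.2 can be sketched as a small helper (an illustration, not code from the study):

```python
def bmi(weight_kg, height_m):
    """BMI = weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m**2

def who_category(bmi_value):
    """WHO BMI categories as used in Section 2.2 (kg/m2)."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25.0:
        return "normal weight"
    if bmi_value < 30.0:
        return "overweight"
    return "obese"

# Example: a hypothetical 70 kg, 1.75 m patient.
value = bmi(70, 1.75)
print(round(value, 2), who_category(value))  # 22.86 normal weight
```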
Molar relationship is diagnosed as class I when the mesiobuccal cusp of the upper first molar (U6) occludes with the buccal groove of the lower first molar (L6), as class II-1 when U6 is mesial to L6 and the upper incisors are proclined, as class II-2 when the upper incisors are retroclined (U1-SN < 100°), and as class III when U6 is distal to L6. In addition, crowding, evaluated in both the upper and lower dentition, is graded as I (<4 mm), II (4~8 mm), and III (≥8 mm). Cephalometric indices on skeletal and incisor parameters were obtained, mainly including the development and relative position of the jaws (SNA, SNB, ANB, SN-MP, FH-MP), as well as the inclination and position of the central incisors (U1-NA, U1-SN, L1-NB, L1-MP). To control for potentially confounding effects and make the results convincing, the above variables were adjusted for as covariates in our study, according to previous literature [7], as they apparently have an effect on the appearance characteristics of soft tissues. Besides, the eight LMCs were also considered as confounders because they may influence each other. ## 2.4. Statistical Analysis Categorical variables were expressed as frequencies (percentages) and continuous variables were described as means (standard deviations, SDs) or medians (interquartile ranges, IQRs). Demographic and clinical characteristics were presented according to BMI categories and compared by one-way ANOVA test and Chi-square test as appropriate. Spearman correlation analysis was used to investigate the relationships of the eight LMCs to each other. Multivariable linear regression models were used to explore the association between BMI and LMCs while considering confounders. 
In this process, four models were adjusted for confounders: model 1 for basic diagnostic information (“Age”, “Gender”, “Molar Relationship”, “Upper crowding”, “Lower crowding”, “Overbite” and “Overjet”), model 2 for anterior teeth and skeletal information (“SNA”, “SNB”, “ANB”, “SN-MP”, “FH-MP”, “U1-NA (mm)”, “U1-SN”, “L1-NB (mm)” and “L1-MP”), model 3 for the LMCs themselves (NLA, SSD, BULT, ULT, ULL, SPT, ISD and LLL), and model 4 for all of these. Adjusted LMC values were compared among the four BMI categories by two-sample t-tests and one-way ANOVA tests. Loess (local polynomial regression fitting) was employed to describe the variation tendency of the eight adjusted LMCs with BMI. The multivariable linear regression results were expressed as coefficients and $95\%$ confidence intervals (CIs). Regression-based mediation analysis was used to distinguish the direct effect of BMI on LMCs from the indirect effect mediated by ULL. Three estimates were obtained as follows: (a) total effect, i.e., the overall association between BMI and LMCs, including direct and indirect effects; (b) direct effect, the association between BMI and LMCs; and (c) indirect effect, the association between BMI and LMCs mediated by ULL. All statistical analyses and result visualization were performed using R software (version 4.1.2). A p-value < 0.05 was considered statistically significant. ## 3.1. Study Participant Characteristics A total of 2079 patients were initially identified, with 1185 remaining as final participants after applying the exclusion criteria. Among the 1185 included participants, the eight LMCs were normally distributed according to Kolmogorov-Smirnov tests ($p > 0.05$), consistent with the natural distribution of such data in the real world (Figure 3A). Table 2 shows the mean and standard deviation of each LMC and other clinical information in detail. 
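The confounder adjustment behind the four models amounts to regressing each LMC on BMI plus covariate columns. A minimal ordinary-least-squares sketch on synthetic data illustrates why adjustment matters: the direct BMI coefficient for lower lip length (0.208) is borrowed from the study's Table 3, but the age effect, noise levels, and data are invented for illustration (the study was analyzed in R, not Python):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1185
age = rng.uniform(12, 40, n)                    # confounder (hypothetical range)
bmi = 17 + 0.10 * age + rng.normal(0, 2, n)     # BMI partly driven by age
# Lower lip length: assumed direct BMI effect 0.208 plus an age effect
# that biases a crude (unadjusted) estimate upward.
lll = 45 + 0.208 * bmi + 0.30 * age + rng.normal(0, 1, n)

def ols(y, *covariates):
    """Least-squares fit with intercept; returns slopes (intercept excluded)."""
    X = np.column_stack([np.ones_like(y)] + list(covariates))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

crude = ols(lll, bmi)[0]          # unadjusted: inflated by the age pathway
adjusted = ols(lll, bmi, age)[0]  # adjusted: recovers roughly the true 0.208
print(round(crude, 3), round(adjusted, 3))
```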
Except for NLA-SPT and ULT-ULL, all other LMC pairs were significantly correlated (p < 0.05), confirming that they are closely related and influence each other (Figure 3B). BMI was not normally distributed according to the Kolmogorov-Smirnov test (p < 0.05), with a mean of 19.41 ± 2.93 kg/m² and a median of 19.03 (IQR 17.58–20.08) (Figure 3C), which we speculate reflects increasing obesity. As for the BMI categories, normal weight was the most prevalent (55.27%), followed by underweight (40.93%) and overweight (2.87%), with obese the least frequent (0.93%) (Figure 3D). ## 3.2. Associations between BMI and Lip Characteristics The univariate analysis between BMI and the LMCs indicated that NLA was negatively correlated with BMI, while the other LMCs were positively correlated with BMI (Figure 3E). However, this analysis did not account for confounders, including demographic information, dental features, cephalometric indices on skeletal and incisor parameters, and the other LMCs. To explore whether BMI independently affected the LMCs, multivariable linear regression models were used, and the results are reported in Table 3. The results show that NLA was negatively correlated with BMI (model 1: β = −0.260, 95% CI −0.465 to −0.056; model 2: β = −0.342, 95% CI −0.529 to −0.154; model 3: β = −0.177, 95% CI −0.337 to −0.017), but not significantly after adjusting for all considered covariates in model 4. By contrast, SSD was positively correlated with BMI (model 1: β = 0.059, 95% CI 0.016 to 0.102; model 2: β = 0.044, 95% CI 0.007 to 0.080), but not significantly after adjusting for the LMCs or for all covariates (models 3 and 4). The same pattern was seen for BULT and ULT, indicating that BMI did not influence these measures independently, but rather via other routes.
The remaining four LMCs (ULL, SPT, ISD and LLL) had a significant positive correlation with BMI in all four models, suggesting that BMI is an independent factor influencing these four LMCs after adjustment for confounders. ## 3.3. Tendency of Lip Characteristics with BMI Variation To fully understand how the LMCs vary with BMI, scatter plots of the correlation between the LMCs and BMI were made after adjusting for all covariates by multivariable linear regression (Figure 4). The local polynomial regression fit (blue line) reveals the true variation of each LMC with BMI, while the gray dashed line, fitted by linear regression, shows the general trend. The results show that NLA was negatively correlated with BMI, while the other LMCs were positively correlated with BMI, in accordance with the univariable analysis. Interestingly, the linear relationship did not hold in the obese category, which seemed to weaken or reverse the association. Furthermore, a sensitivity analysis of the BMI categories demonstrated similar results (Figure 5). The overweight category had the smallest value of NLA and the largest values of SSD, BULT, ULT, ULL, ISD and LLL; only SPT increased gradually across the BMI categories. These results suggest that BMI affects the LMCs differently in obese patients, and that the linear relationship between BMI and the LMCs may hold only in non-obese patients. ## 3.4. Mediation Analysis Mediation analyses were performed to investigate why and how BMI related to NLA, SSD, BULT and ULT despite not being an independent factor for them. The results show that the total effects of BMI on SSD and BULT (0.014, p < 0.05; 0.031, p = 0.014; respectively) consisted of direct effects (0.005 and 0.020, respectively) and significant indirect effects (0.009 = 0.046 × 0.191, p = 0.018; 0.011 = 0.053 × 0.207, p = 0.010; respectively), indicating that BMI may relate to SSD and BULT through ULL (Figure 6B,C).
However, the total effects of BMI on NLA and ULT (−0.056 and −0.007, respectively) consisted of non-significant direct effects (−0.081 and 0.000, respectively) and non-significant indirect effects (0.025 = 0.036 × 0.679 and −0.008 = 0.041 × (−0.197), respectively), indicating that BMI related to NLA and ULT neither directly nor through ULL (Figure 6A,D). ULL is the only upper lip characteristic independently affected by BMI, which is why we considered it as a mediator. ## 4. Discussion As the main part of the lower facial soft tissue, the lips are vital to the perception of facial aesthetics. One of the primary concerns in orthodontic treatment is a coordinated and attractive soft tissue profile, to which the lips make a major contribution. Body mass index (BMI), the most widely used clinical measure of general obesity, has recently been found to affect facial bone and soft tissue structures [15]. Previous studies mainly focused on its relationship with facial soft tissue thickness (FSTT) and considered BMI one of the key factors affecting FSTT. These findings are mainly applied in forensic science for more detailed facial soft tissue reconstruction and facial recognition [24]. A recent study reported an association between BMI and linear lip measurements, but did not examine it in detail [22]. Although the lips are one of the most important factors affecting facial aesthetics, studies on lip morphology characteristics and their association with BMI are few, and the relationship between the two is unclear [18,19]. Therefore, this study aimed to investigate the relationships between BMI and the LMCs. Most existing cephalometric analyses have been derived from orthodontics in Western countries, so the reference values of cephalometric measurements have mainly been standardized according to Caucasians [25,26].
However, there are significant differences in soft tissue characteristics and aesthetic preferences among ethnic groups [1], so it is necessary to conduct a study with a sufficient sample size in a Chinese population to facilitate orthodontists' diagnosis and treatment planning and to contribute to the development of facial reconstruction in forensic science. In our study, the prevalence of the different BMI categories and the mean values of the lip morphology characteristics (NLA, SSD, BULT, ULT, ULL, SPT, ISD and LLL) were described in a Chinese population. Previous studies have shown that facial soft tissue thickness (FSTT) increases with BMI and that a larger BMI is associated with larger linear measurements [18,22]. Similarly, we demonstrated significant differences in lip characteristics across BMI categories and found that the lip characteristics were positively correlated with BMI, except for the nasolabial angle, which was negatively correlated with BMI. With increasing BMI, facial soft tissue thickness increases and, conceivably, the lip length and thickness measurements also increase, while the nasolabial angle decreases due to protrusion of the upper lip, as confirmed by previous studies [7]. However, it is interesting that this linear relationship did not hold in obese patients. One possibility is that the sample size of the obese group was too small to detect a true trend. Another is that the effect of increased BMI on soft tissue is limited: a very large BMI can override all other anatomical factors [27], and in the obese category changes in soft tissue size may have reached their limits. This is similar to previous studies' speculation that directional stretching of soft tissues is limited and that there is mutual compensation [22,28]. Therefore, when BMI increases beyond a certain level, the additional soft tissue volume may no longer increase length and thickness and may instead be compensated by width.
Further research using three-dimensional imaging technology is needed to confirm this, as this study involved only cephalometric analysis and did not measure lip width. We also suspect this occurs because the lip is closely related to the supporting hard tissue structures and is limited by adjacent structures [29]: the upper lip is bounded by the nose and the lower lip, and the lower lip by the upper lip and the chin, constraining its size variation in three dimensions. These results indicate that a patient's BMI should be considered in future treatment planning, and that treatment plans should be individualized because soft tissue characteristics vary across BMI categories. In addition, the independent association of the LMCs with BMI was assessed in this study while considering various confounders. Multiple variables affecting soft tissue morphology have been identified. Many studies have focused on age, with several showing that the upper and lower lips retrude significantly relative to the aesthetic line and become thinner with aging [30,31,32]. Gender is also an important factor in soft tissue morphology: it has been demonstrated in many different countries and regions that facial soft tissue, including the lips, is thicker in males than in females [30,33,34]. Race-related soft tissue differences have also been commonly reported [35,36,37]: broadly, populations of African descent have the thickest and most protruding lips, followed by Asian populations, with Caucasian populations having the thinnest and straightest lips and shorter upper lips. In addition, incisor position and skeletal pattern are confounding factors highlighted by numerous studies [6,7,38], which show that upper lip thickness and length are significantly correlated with the protrusion of the maxillary incisors and that upper lip morphology differs significantly among skeletal patterns.
After adjusting for confounding variables that may affect the relationship between lip characteristics and BMI, ULL, SPT, ISD and LLL were found to be positively correlated with BMI. Previous studies simply explored the relationship between soft tissue thickness and BMI and concluded that obese subjects have thicker soft tissue at gnathion and pogonion. However, these studies did not have sufficient sample sizes, did not adequately adjust for confounding factors, and did not specifically measure and compare lip characteristics among BMI categories [19,39,40]. In clinical practice, teenage patients, who account for a large proportion of orthodontic treatments, are at the peak of their growth and development, with rapid increases in height and weight and corresponding changes in BMI. The results of this study can help clinicians anticipate the possible impact of BMI changes on the soft tissue profile. Moreover, BMI has been shown to influence skeletal development, with obese or overweight children and adolescents more likely to experience advanced dental and skeletal maturation [41,42,43], thus influencing the timing of intervention and treatment planning. For adult patients, studies have shown that the mandibular cortex is thicker in obese and overweight patients and that periodontal tissue responds differently to orthodontic force [41,44]. It has also been found that increased BMI may be related to decreased compliance with orthodontic treatment, which is worthy of attention [14]. However, NLA, SSD, BULT and ULT were not independently affected by BMI, and mediation analysis found that BMI was associated with SSD and BULT through ULL, a novel finding.
This may indicate that changes in the length of the upper lip are limited by the adjacent structures, so that when upper lip length increases to a certain extent, it is compensated for by changes in other dimensions, such as basic upper lip thickness, which in turn affects the depth of the upper lip groove. However, the mediation analysis between the other two upper lip morphology characteristics, NLA and ULT, and BMI was not significant. We speculate that the sample size was insufficient to detect statistical differences, or that there are lip characteristics that were not measured; further research could focus on this. To the best of our knowledge, the current study is the first to explore the association of BMI with lip morphology characteristics in a Chinese population, and it identifies a non-negligible influence of BMI on lip morphology, providing a further reference for diagnosis, personalized treatment planning and subsequent scientific research in orthodontics. Building on previous studies and the large sample size of this study, multiple linear regression was used to adequately adjust for covariates, increasing the accuracy and reliability of the results [45,46]. Furthermore, a dedicated mediation analysis was used to further explore the lip characteristics not independently affected by BMI, identifying ULL as a mediator. The study has several limitations. First, as a cross-sectional study, it can only establish whether there is a correlation between BMI and the LMCs, not a direct causal relationship. Second, due to limitations of the database, participants younger than 12 years, older than 53 years, with a history of orthodontic treatment, or without permanent dentition could not be included, so the conclusions need to be interpreted with caution.
Third, although covariates were adjusted for to control for confounding, there may still be unmeasured or unknown covariates, such as waist circumference, body fat percentage, and systemic diseases such as diabetes and hypertension. Finally, the sample of this study is drawn from a Chinese population, so the findings should be generalized to other populations with caution. Future studies could use frontal photographs and three-dimensional facial scanning techniques to examine more lip features from different perspectives and explore their relationship with BMI. More diverse groups, such as children and the elderly, should also be considered.
# Spatial Distribution of COVID-19 Hospitalizations and Associated Risk Factors in Health Insurance Data Using Bayesian Spatial Modelling ## Abstract The onset of COVID-19 across the world has elevated interest in geographic information systems (GIS) for pandemic management. In Germany, however, most spatial analyses remain at the relatively coarse level of counties. In this study, we explored the spatial distribution of COVID-19 hospitalizations in health insurance data of the AOK Nordost health insurance. Additionally, we explored sociodemographic factors and pre-existing medical conditions associated with hospitalization for COVID-19. Our results clearly show strong spatial dynamics of COVID-19 hospitalizations. The main risk factors for hospitalization were male sex, being unemployed, foreign citizenship, and living in a nursing home. The main pre-existing conditions associated with hospitalization were certain infectious and parasitic diseases; diseases of the blood and blood-forming organs; endocrine, nutritional and metabolic diseases; diseases of the nervous system; diseases of the circulatory system; diseases of the respiratory system; diseases of the genitourinary system; and symptoms, signs and findings not elsewhere classified. ## 1. Introduction The COVID-19 pandemic has impacted, and continues to impact, billions of people across the world and was declared a public health emergency of international concern by the World Health Organization (WHO) [1]. To contain the spread of the virus, lockdowns were declared across the globe, resulting in the closure of cities, suspension of schools, and restrictions on international travel, and producing not only a public health crisis but also a humanitarian, economic and social crisis [2,3]. In Germany, the first case was reported in Bavaria at the end of January 2020. By the beginning of March, almost all German federal states had reported cases of the disease.
The southern counties in Bavaria and Baden-Württemberg in particular were affected by high case numbers [4]. From 16 March, the first lockdown was imposed: far-reaching exit and contact restrictions applied, which were only gradually lifted again at the end of April. Several studies from German-speaking countries have investigated the spread of COVID-19 infections from a spatiotemporal perspective. The first study [5] dates from May 2020 and is known as the Ischgl study. Based on a spatial diffusion model, correlations between the occurrence of COVID-19 infections in Germany and population mobility were established for the first time. The vacation resort of Ischgl in Austria was given special importance as a starting point for infections in Germany, which primarily brought into focus the importance of mobility as a driver of virus spread. Steiger et al., in their study on the determinants of regional infection incidence at the level of districts and district-free cities in the period from 15 February to 8 July 2020, found that increasing temperature and, especially, mobility for basic supplies reduce the incidence of infection, whereas recreational mobility and precipitation can increase it [6]. Scarpone et al. analysed spatial associations between COVID-19 case rates and spatial characteristics of infrastructure, sociodemographics, and the built environment [7]. In summary, the results showed, among other things, associations between built density, place of residence, transportation infrastructure (e.g., access to intensive care units), and sociodemographic factors (e.g., unemployment) as predictors of regional incidence rates in Germany. Overall, it is clear that mobility and sociodemographic circumstances in particular have an important influence on the regional incidence of infection. In addition, it has been shown that density, built-up areas, and even weather influence the frequency of contact.
Importantly, the determinants overlap spatially and temporally [8] and also depend on the pandemic phase [9]. For example, in the early pandemic phase until mid-April 2020, a socioeconomic gradient with higher incidence in less deprived regions of Germany is evident, but this gradient dissipates or reverses in favour of more deprived regions in the south of the country as the pandemic progresses [10]. This highlights the need to consider spatiotemporal dynamics within the observation period when relating COVID-19 determinants to infection incidence, as the predictors of incidence rates depend on the pandemic phase. The fast spread of COVID-19 has increased public awareness of the use of geographic information systems (GIS) for pandemic preparedness, resulting in a large number of studies revealing the potential of GIS and spatial statistics, especially cluster detection methods, to detect outbreaks [3,11,12,13]. Likewise, GIS has been used extensively to identify sociodemographic and environmental characteristics associated with COVID-19, enabling a better understanding of the population groups most at risk [14,15]. In Germany, most research on the spatial distribution of COVID-19 is restricted to the relatively coarse level of counties [16]. This masks important variation at the small-area, municipality, or even neighbourhood level and hampers effective outbreak detection and management, despite numerous studies having shown the value of microgeographic data on COVID-19 [17,18,19]. Likewise, studies on spatiotemporal dynamics focus mainly on cluster detection methods, with SaTScan (software for spatial, temporal, and space-time scan statistics) being the most widely used statistical software [19,20]. Cluster tests are an important tool for effectively detecting outbreaks. A large number of studies have examined sociodemographic risk factors for COVID-19.
However, the majority of studies are based on an ecological study design rather than on individual-level data [14,15]. While such studies have the advantage that they may represent the total population, they suffer from the ecological fallacy: results based on aggregated data do not necessarily represent associations at the individual level. In contrast, studies based on individual data often suffer from small population samples (e.g., a single hospital) [21,22]. In this context, health insurance data not only provide fairly detailed insights into the spatial and spatiotemporal distribution, since they can be analysed at the microgeographic level, but also offer a rich and detailed source of individual-level sociodemographic information and pre-existing medical conditions. The aim of this research is therefore to (i) provide insight into the spatial distribution of COVID-19 hospitalizations based on the data of northeast Germany's largest statutory health insurance provider and (ii) analyse sociodemographic factors and medical conditions associated with hospitalization. ## 2.1. Data AOK Nordost is the largest statutory health insurance provider in northeast Germany and covers approximately 25% of the population in the three federal states of Berlin, Brandenburg, and Mecklenburg-Western Pomerania. For this study, we used all 1.7 million insurants who were insured in 2021. We defined a COVID-19 hospitalization as an insurant having a positive PCR test in a hospital, coded with the International Classification of Diseases (ICD-10) code U07.1!. To ensure that we captured only hospitalizations where COVID-19 was likely the primary reason for admission, we additionally restricted our data to individuals who, in addition to U07.1!, had a diagnosis of viral pneumonia or respiratory syndrome as defined by the ICD-10 codes J12.8, J12.9, J20.8, J20.9, J21.8, J21.9, and J22.-.
In total, 8402 insurants were hospitalized due to COVID-19. For the analysis of possible risk factors for COVID-19 hospitalization, we included sex, age, being unemployed on 1 July 2021, and foreign citizenship. To account for underlying chronic diseases, we included information on whether the insurant had a confirmed diagnosis, aggregated to ICD-10 chapters to keep the number of possible diagnoses per insurant at a reasonable level. The included ICD-10 chapters are I: Certain infectious and parasitic diseases; II: Neoplasms; III: Diseases of the blood and blood-forming organs and certain disorders involving the immune mechanism; IV: Endocrine, nutritional and metabolic diseases; V: Mental and behavioural disorders; VI: Diseases of the nervous system; VII: Diseases of the eye and adnexa; VIII: Diseases of the ear and mastoid process; IX: Diseases of the circulatory system; X: Diseases of the respiratory system; XI: Diseases of the digestive system; XII: Diseases of the skin and subcutaneous tissue; XIII: Diseases of the musculoskeletal system and connective tissue; XIV: Diseases of the genitourinary system; XV: Pregnancy, childbirth and the puerperium; XVI: Certain conditions originating in the perinatal period; XVII: Congenital malformations, deformations and chromosomal abnormalities; and XVIII: Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified. At the aggregated level, we used a commercial dataset from WIgeoGIS of so-called Geomarkets. A Geomarket is an administrative unit of approximately 300 households and contains valuable information on the demographics, socioeconomic situation, and household composition of the respective population. This data source is more useful than free official administrative data, which are only available at the level of municipalities, where large cities such as Germany's capital, Berlin, constitute only a single municipality.
In contrast, Geomarkets allow an analysis of intra-urban differences. In total, northeast Germany consists of approximately 16,400 Geomarkets. The insurants were aggregated to the level of Geomarkets based on their respective address coordinates. Several studies have demonstrated that area deprivation has a significant impact on COVID-19 [23,24]. We therefore calculated a deprivation index based on the following variables: unemployment rate, proportion of employed persons at the place of residence, purchasing power, proportion of persons with high school degrees, and proportion of persons without formal education. The domains of employment, income, and education were weighted equally. The resulting index values range from 1 (least deprived) to 100 (most deprived). The methodology is similar to the calculation of the German Index of Multiple Deprivation by Werner Maier [25]. ## 2.2. Statistical Analysis To visualize the cumulative one-year COVID-19 incidence, we aggregated the insurants to the level of the 16,400 Geomarkets based on their address coordinates. To visualize regional differences at this fine level, we used the Besag-York-Mollié (BYM) model, which has been used extensively to display disease rates at fine spatial resolution [26]. The input for this model consisted of the sex- and age-adjusted number of hospitalized COVID-19 patients and the expected cases. The basic assumption is that the COVID-19 hospitalizations follow a Poisson distribution, where the expected cases of a Geomarket are the global rate (the total number of observed cases divided by the total number of insurants) multiplied by the number of insurants in that Geomarket. The model adjusts for the uneven distribution of the AOK Nordost insurants by weighting the incidence of a Geomarket by the average of the neighbouring Geomarkets and additionally shrinking the rate towards the global mean. This is performed by providing a neighbourhood matrix of the Geomarkets.
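The expected-case calculation that feeds the BYM model can be sketched as follows. The actual analysis was run in R with the INLA package; this is a minimal Python illustration with toy counts (all numbers are made up, not study data).

```python
import numpy as np

# Toy data: observed COVID-19 hospitalizations and number of insurants
# for five hypothetical Geomarkets.
observed = np.array([3, 0, 7, 2, 5])
insurants = np.array([1200, 800, 2500, 900, 1600])

# Global hospitalization rate across all Geomarkets.
global_rate = observed.sum() / insurants.sum()

# Expected cases per Geomarket under the global rate; the BYM model
# then assumes observed ~ Poisson(expected * relative_risk), with the
# relative risk smoothed over the neighbourhood structure.
expected = global_rate * insurants

# Raw standardized incidence ratio per Geomarket (before smoothing).
sir = observed / expected
print(np.round(sir, 2))
```

By construction the expected cases sum to the observed cases, so the smoothed relative risks are shrunk towards a global mean of 1.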
We chose queen contiguity, where Geomarkets are defined as neighbours if they share a common edge or corner [27]. The model then smooths out the noise associated with small numbers of COVID-19 hospitalization cases as a function of the data in surrounding areas. A more detailed statistical explanation is given by Lawson et al. (2000) [28]. Additionally, to preserve insurant confidentiality, we created a continuous surface by applying an interpolation method called the stochastic partial differential equation (SPDE) approach. This approach has also been used to create small-area continuous surfaces for several conditions, such as HIV prevalence in sub-Saharan Africa [29] and disease management programme enrolment in Germany [30]. The BYM model and the SPDE approach were computed using integrated nested Laplace approximation as implemented in the INLA package for R version 4 [31], and the results were displayed with the R package ggplot2 [32]. ## 2.3. Regression Analysis To identify possible risk factors for COVID-19 hospitalization, we used a Bayesian global logistic regression model, using the BYM model to account for spatial relationships in the form of structured and unstructured effects at the level of the 16,400 Geomarkets [30,31]. At the individual level, we used sex, age, foreign citizenship, being unemployed on 1 July 2021, and living in a nursing home. At the aggregated level, we used our deprivation index and average household size. We transformed the deprivation index into quintiles and included it as a categorical variable, with the first quintile (the lowest level of deprivation) as the reference category. The response variable was coded as binary (the insurant was hospitalized for COVID-19 vs. was not hospitalized). The regression coefficients were then exponentiated to allow interpretation as odds ratios, which are easier to interpret than plain regression coefficients [33,34].
To check for multicollinearity among the explanatory variables, we started with a non-spatial global regression model and used the HH package in R, which assigns a variance inflation factor (VIF) to each explanatory variable in the regression model. A VIF > 5 indicates the presence of multicollinearity and warrants the removal of one or more explanatory variables [35]. ## 3.1. Spatial Distribution of Accumulated COVID-19 Incidence 2021 The accumulated one-year incidence of COVID-19 hospitalizations ranged between 0 and 1422 hospitalized insurants per 100,000 insurants. The highest incidence was observed in the south of Brandenburg in the counties of Elbe-Elster and Spree-Neiße, but also in smaller spots scattered across the whole study area (Figure 1). The lowest incidence was observed on the coastline of Mecklenburg-Western Pomerania, including the city of Rostock. ## 3.2. Risk Factors for COVID-19 Hospitalizations Male insurants had a 67.7% higher risk of hospitalization than female insurants (Table 1). With every year of age, the risk of hospitalization increased by 3.9%. Insurants with foreign citizenship had a 150.2% higher risk than insurants with German citizenship. Being unemployed increased the risk by 29.6%. Insurants living in a nursing home had a 75.9% higher risk than insurants not living in a nursing home. Among pre-existing chronic conditions significantly associated with hospitalization, certain infectious and parasitic diseases conferred a 23.6% higher risk. Diseases of the blood and blood-forming organs increased the risk by 29.3%; endocrine, nutritional and metabolic diseases by 35.5%; diseases of the nervous system by 28.4%; diseases of the circulatory system by 21.4%; and diseases of the respiratory system by 23.2%.
Diseases of the genitourinary system increased the risk by 24.5%, and symptoms, signs and findings not elsewhere classified by 16.2%. Average household size did not have a significant impact on the risk of hospitalization. The effect of deprivation was not linear: only the second-least-deprived quintile and the medium-deprived quintile had a significant effect on the risk of hospitalization. Insurants living in second-least-deprived Geomarkets had an 11% higher risk than insurants living in the least deprived quintile, and insurants living in the medium-deprived quintile had an 8% higher risk. ## 4. Discussion This is likely one of the most spatially detailed studies in Germany of COVID-19 hospitalizations based on health insurance data. We found strong spatial differences. The main sociodemographic risk factors for COVID-19 hospitalization were male sex, higher age, being unemployed, and living in a nursing home. Pre-existing conditions associated with hospitalization were certain infectious and parasitic diseases; diseases of the blood and blood-forming organs; endocrine, nutritional and metabolic diseases; diseases of the nervous system; diseases of the circulatory system; diseases of the respiratory system; diseases of the genitourinary system; and symptoms, signs and findings not elsewhere classified. Our results clearly demonstrate the benefits of small-area data on COVID-19 hospitalizations. We aggregated the insurants for the accumulated one-year incidence of 2021 to the level of the 16,400 Geomarkets of our study area, which is far more detailed than the county level at which official data of the Robert Koch Institute are reported [16,36]. Lower individual socioeconomic status was a risk factor for hospitalization. This is in line with other studies, both in the German context [37] and internationally [38].
Our study examined lower socioeconomic status both at the individual level and at the aggregated level, in the form of deprivation at the place of residence, at a very detailed spatial resolution. However, we found that mainly individual-level socioeconomic status is a risk factor, whereas area deprivation is not consistently so. Similarly, our results confirm that foreign citizenship seems to be a risk factor for more severe consequences of a COVID-19 infection. This has been observed in Germany [39] as well as in other high-income countries [40]. We identified insurants living in nursing homes as another sociodemographic high-risk group. This is not surprising, as persons living in nursing homes are generally old and have a higher-than-average number of chronic diseases. Accordingly, these findings are in line with other studies in Germany [41]. While the international literature suggests that area deprivation has an important effect on COVID-19 hospitalization risk [42], we found that insurants living in the second-least- and medium-deprived Geomarkets had a higher risk than those in the least-deprived Geomarkets. Since our study is based at the microgeographic level of the Geomarkets, this might further reflect the need for more spatially detailed research on COVID-19, as the problem of ecological fallacy grows with the size of the geographical unit for which data are available [43]. Based on our findings, we might conclude that, at least for our subsample of the population, individual-level socioeconomic status is more relevant than the place where the insurants live. Since our study included both individual-level and area-level socioeconomic status, our findings add more depth than previous studies, which mostly included only one measure of socioeconomic status, but seldom both. ## 5. Limitations Our study has several limitations. The database of AOK Nordost does not contain any information on the vaccination status of its insurants, so the positive effect of vaccination could not be quantified. It would have been interesting to quantify the effect of vaccination, with regard to date of vaccination, number of doses, and pre-existing conditions, on COVID-19 hospitalizations. Such an approach could help to determine in which groups with specific underlying medical conditions vaccination is more effective than in others. Although we selected as cases only those persons who had a laboratory-confirmed diagnosis of COVID-19 as the primary code in addition to a secondary diagnosis of viral pneumonia or respiratory syndrome, it is not clear how high the quality of diagnosis actually is; e.g., COVID-19 may have been detected as a by-product of another reason for hospital admission. AOK Nordost is northeast Germany's largest health insurance provider, covering approximately 25% of the inhabitants. However, large sociodemographic differences exist between members of different health insurance providers, with AOK Nordost having a higher proportion of elderly and chronically ill persons. As a result, our analysis may not be representative of the whole population. While the prevalence rates may be slightly higher than for all statutory health insurants, the regional distribution of diseases is generally comparable to that of all statutory health insurants [26,44,45,46]. Consequently, the general level of COVID-19 hospitalizations may be slightly higher than for all statutory health insurants, but the regional distribution is expected to remain comparable. Additionally, since tests for COVID-19 in 2020 and 2021 were mostly performed at testing sites and not within ambulatory care, we could not see whether a COVID-19 diagnosis existed before the insurant was hospitalized.
This might influence the validity of our results, since our database contains only hospital diagnoses for COVID-19 for those years.

## 6. Conclusions

This is likely one of the most spatially detailed studies on the spatial distribution of COVID-19 hospitalizations and its associated risk factors. We found important regional variations at very fine scales, clearly demonstrating the need for more fine-grained spatial data on possible future pandemics. Our results clearly identified persons with lower socioeconomic status and persons living in nursing homes as important sociodemographic risk groups. Additionally, we identified several disease groups as risk factors for hospitalizations. COVID-19 hospitalizations and associated risk factors have significant policy implications that must be taken into consideration when creating and implementing mitigation and containment strategies. Age, underlying health conditions, and socio-economic status have been identified as key risk factors for severe illness and hospitalization from COVID-19. Therefore, policies that target vulnerable populations, such as elderly individuals and those with underlying health conditions, are crucial in reducing hospitalizations and deaths from the virus. Additionally, policies that address socio-economic disparities, such as increasing access to healthcare and providing financial support for those who have been impacted by the pandemic, can also have a meaningful impact on reducing hospitalizations. These results might serve as a foundation for better outbreak and containment strategies.
# Comparison of Frailty Assessment Tools for Older Thai Individuals at the Out-Patient Clinic of the Family Medicine Department

## Abstract

This study evaluated the validity of the screening tools used to evaluate the frailty status of older Thai people. A cross-sectional study of 251 patients aged 60 years or more in an out-patient department was conducted using the Frailty Assessment Tool of the Thai Ministry of Public Health (FATMPH) and the Frail Non-Disabled (FiND) questionnaire, and the results were compared with Fried's Frailty Phenotype (FFP). The validity of the data acquired using each method was evaluated by examining their sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and Cohen's kappa coefficient. Most of the participants were female ($60.96\%$), and most were between 60 and 69 years old ($65.34\%$). The measured prevalences of frailty were $8.37\%$, $17.53\%$, and $3.98\%$ using the FFP, FATMPH, and FiND tools, respectively. FATMPH had a sensitivity of $57.14\%$, a specificity of $86.09\%$, a PPV of $27.27\%$, and an NPV of $95.65\%$. FiND had a sensitivity of $19.05\%$, a specificity of $97.39\%$, a PPV of $40.00\%$, and an NPV of $92.94\%$. The results of the Cohen's kappa comparison of these two tools with FFP were 0.298 for FATMPH and 0.147 for FiND. The predictive values of both FATMPH and FiND were insufficient for assessing frailty in a clinical setting. Additional research on other frailty tools is necessary to improve the accuracy of frailty screening in the older population of Thailand.

## 1. Introduction

Frailty is defined as a reduction in the ability to cope with everyday or acute stressors, particularly among older adults [1]. Frailty results in an increased vulnerability brought about by age-associated declines in physiological reserves and functioning across multiple organ systems [1].
The consequences of this condition heighten an individual's susceptibility to increased dependency and vulnerability, as well as to an increased risk of death [1,2]. The health care system is affected by increases in health care needs, admissions to hospital, and admissions to long-term care. However, frailty is a dynamic process which can emerge from pre-frail or robust statuses [3]. Validated assessment tools and appropriate interventions are important to reduce morbidity and mortality. A systematic review and meta-analysis of the models used to evaluate frailty among ≥50-year-olds in 62 countries found a frailty prevalence of $12\%$ using physical frailty models and $24\%$ using deficit accumulation models. The prevalences of pre-frailty were $46\%$ and $49\%$ for the physical frailty models and the deficit accumulation models, respectively [4]. In terms of geographical location, using physical frailty models, the highest prevalence of physical frailty was found in Africa ($22\%$) and the lowest prevalence was in Europe ($8\%$), while the pre-frailty prevalence was highest in the Americas ($50\%$) and lowest in Europe ($42\%$). However, using deficit accumulation models, the prevalence of frailty was found to be highest in Oceania ($31\%$) and lowest in Europe ($22\%$), while pre-frailty prevalence was highest in Oceania ($51\%$) and lowest in Europe and Asia ($49\%$). The population-level frailty prevalence among community-dwelling adults varied by age, gender, and frailty classification [4]. Several studies have reported that frailty is related to a variety of negative health outcomes and diseases. In 2013, cognitive frailty was described as a group of heterogeneous clinical symptoms based on the presence of both physical frailty and cognitive impairment, excluding consistent Alzheimer's disease or other dementias.
The prevalence of cognitive frailty among community-dwelling older adults was reported to be $9\%$ in a systematic review and meta-analysis [5]. Similarly, the prevalences of frailty and pre-frailty were found to be $20.1\%$ and $49.1\%$, respectively, in a systematic review and meta-analysis study of community-dwelling older adults with diabetes. Older adults with diabetes were more susceptible to being frail than those without diabetes [6]. Additional factors were found to have an influence on frailty; for example, fruit and vegetable consumption was associated with a lower risk of frailty [7]. There are many measurement tools available which can provide frailty scores when used to screen for or assess the degree of frailty; however, no single score metric is considered the gold standard [2,8]. It has been recommended that geriatricians in the Asia-Pacific region use a validated measurement tool to identify frailty [2]. There are three major approaches used, i.e., the physical frailty phenotype model of Fried et al. and its rapid screening tool, FRAIL; the deficit accumulation model of Rockwood and Mitnitski, which captures multimorbidity; and mixed physical and psychosocial models, such as the Tilburg Frailty Indicator [9] and the Edmonton Frailty Scale [10]. Another approach, by Aguayo GA et al. [8], consists of the use of four models: a phenotype of frailty model, a multidimensional model, an accumulation of deficits model, and a disability model. The most commonly used method in the literature is the physical frailty phenotype [11]. The phenotype diagnosis is based on the presence of at least three of the following five criteria: weight loss, exhaustion, physical inactivity, slow walking speed, and weak grip strength [12].
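As an illustration of this count-based rule (a minimal sketch, not any author's implementation; the function name and boolean encoding are mine, and the pre-frail band of one or two positive criteria follows the common Fried-style cutoffs), the classification can be written as:

```python
# Hypothetical sketch of scoring Fried's Frailty Phenotype (FFP).
# Each argument is True when the corresponding deficit is present.
# Cutoffs: 0 criteria = non-frail, 1-2 = pre-frail, >= 3 = frail.

def ffp_phenotype(weight_loss: bool, exhaustion: bool, low_activity: bool,
                  slow_walking: bool, weak_grip: bool) -> str:
    score = sum([weight_loss, exhaustion, low_activity,
                 slow_walking, weak_grip])
    if score == 0:
        return "non-frail"
    if score <= 2:
        return "pre-frail"
    return "frail"

print(ffp_phenotype(True, True, False, True, False))  # frail (3 of 5 criteria)
```

Counting positive criteria rather than branching on each item keeps the rule symmetric: only the total number of deficits matters, not which ones are present.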
The present study reviews the five phenotypic criteria, which have been measured in different ways across various studies; this could potentially affect the estimates of the prevalence of frailty and the predictive ability of the aforementioned phenotype, potentially leading to different classifications and results [11]. Kutner and Zhang [13] commented on the replacement of the performance-based measures (i.e., grip strength and walking speed) in the original frailty phenotype definition with self-reported items. In Thailand, a study by Boribun N. et al. [14] found that the prevalence of frailty in Thai community-dwelling older adults was $24.6\%$, based on the Frail Non-Disabled (FiND) questionnaire. A 2020 study by Sukkriang and Punsawad [15] compared the validity of various frailty assessment tools and found that the prevalence of frailty of older individuals in Thai communities was $11.7\%$, using Fried's Frailty Phenotype (Cardiovascular Health Study) criteria. The Clinical Frailty Scale (CFS) used in the same study had a sensitivity of $56\%$ and a specificity of $98.41\%$; the simple FRAIL questionnaire had a sensitivity of $88\%$ and a specificity of $85.71\%$; the PRISMA-7 questionnaire had a sensitivity of $76\%$ and a specificity of $86.24\%$. The Timed Up and Go (TUG) test had a sensitivity of $72\%$ and a specificity of $82.54\%$. The Gerontopole frailty screening tool (GFST) had a sensitivity of $88\%$ and a specificity of $83.56\%$. A 2022 study by Sriwong et al. [16] developed a Thai version of the Simple Frailty Questionnaire (T-FRAIL) and modified it to improve its diagnostic properties in the preoperative setting. Their study found that the incidence of frailty diagnosed using the Thai Frailty Index was $40.0\%$.
The identification of frailty using a score of two points or more provided the best Youden index, at 63.1, with a sensitivity of $77.5\%$ ($95\%$ CI 69.0–84.6) and a specificity of $85.6\%$ ($95\%$ CI 79.6–90.3). There is currently a need for simple, valid, accurate, and reliable methods and tools for detecting frailty which are appropriate for the Thai population. Our team works in an academic hospital and has developed evidence data in our clinic in the hospital. Therefore, the present study was conducted in this clinic. This study compared selected frailty assessment tools, including Fried's Frailty Phenotype (FFP), which is the most commonly used assessment tool and served as the reference; the Frailty Assessment Tool of the Thai Ministry of Public Health (FATMPH), which is recommended in the Thai check-up manual but lacks published validation; and the FiND questionnaire, which is used in communities but for which, as yet, there is no evidence of its use at the Out-Patient Department (OPD) of Maharaj Nakorn Chiang Mai Hospital (a university-level hospital).

## 2.1. Samples

This cross-sectional study included 251 older patients (aged 60 years or older) who came to the OPD of the Family Medicine Department, Maharaj Nakorn Chiang Mai Hospital, Faculty of Medicine, Chiang Mai University, during the period of December 2016–March 2017. The patients signed a consent form declaring their agreement to participate in this research. This study was approved by the Research Ethics Committee of the Faculty of Medicine of Chiang Mai University (no. 380/2016). The inclusion criteria for participants were: [1] Thais 60 years or older who had been seen at the OPD for more than 1 year; [2] the ability to communicate orally in Thai and read the Thai language; [3] the ability to walk by themselves or with walking aids. The exclusion criteria were: [1] being bedridden; [2] being handicapped in both hands; [3] currently having a serious illness; and [4] having impaired cognition.
The sample size was calculated to be 230 using the following formula: $n = Z_{\alpha/2}^{2} \times Se(1 - Se)/(d^{2} \times Prev)$, where n = sample size, Se = sensitivity (0.9), Prev = prevalence (0.15) [17], d = precision of the estimate (1.0), and alpha = 0.1.

## 2.2.1. Fried's Frailty Phenotype

The five criteria of Fried's Frailty Phenotype (FFP) assessment were used as the reference assessment tool in this study, following Fried et al. [12], with slight modification. These criteria were:

[1] Weight loss. My weight has decreased at least 4.5 kg in the past year or I have had an unintentional weight loss of at least $5\%$ of my previous year's body weight (no = 0, yes = 1).
[2] Exhaustion. Self-reported results of the Center for Epidemiologic Studies Depression scale (CES–D). Two statements were provided: (2.1) I felt that everything I did was an effort and (2.2) I could not get going. The question is then asked, "How often in the last week did you feel this way?" The alternative answers are: 0 = rarely or none of the time (<1 day), 1 = some or a little of the time (1–2 days), 2 = a moderate amount of the time (3–4 days), or 3 = most of the time. Answers of "2" or "3" to either of these questions were categorized as frail by the exhaustion criterion (no = 0, yes = 1).
[3] Slowness. My walking speed is $20\%$ below baseline (adjusted for gender and height) (no = 0, yes = 1).
[4] Weakness. Grip strength is $20\%$ below baseline (adjusted for gender and body mass index) (no = 0, yes = 1).
[5] Low activity was evaluated with the following question: How often do you engage in activities that require a low or moderate amount of energy such as gardening, cleaning the car, or walking? (more than once a week = 1, once a week = 2, one to three times a month = 3, and hardly ever or never = 4) [18].

A combined FFP score of 0 was considered a "non-frail" phenotype; a score of 1 or 2 was considered a "pre-frail" phenotype; and a score of 3 or more was considered a "frail" phenotype.

## 2.2.2. Frailty Assessment Tool of the Thai Ministry of Public Health

The Frailty Assessment Tool of the Thai Ministry of Public Health (FATMPH) is a modification of Fried's Frailty Phenotype, and is included in the Elderly Screening/Assessment Manual (2015) [19]. The assessment tool has 5 criteria: four questions are self-reports and one is based on measurement by medical staff:

[1] In the past year, has your weight decreased by more than 4.5 kg? (no = 0, yes = 1)
[2] Do you feel tired all the time? (no = 0, yes = 1)
[3] Are you unable to walk alone and need someone for support? (no = 0, yes = 1)
[4] The participants walked in a straight line for a distance of 4.5 m. Time was measured from when they started walking (time < 7 s = 0, time ≥ 7 s or could not walk = 1).
[5] The participant had an obvious weakness in their hands, arms, and legs (no = 0, yes = 1).

A FATMPH score of 0 was considered a phenotype of "non-frail"; a score of 1 or 2 was considered a phenotype of "pre-frail"; and a score of 3 or more was considered a phenotype of "frail".

## 2.2.3. Frail Non-Disabled (FiND) Questionnaire

The Frail Non-Disabled (FiND) questionnaire is designed to differentiate between frailty and disability. FiND was used for community-dwelling older Thai adults by Boribun N. et al. [14]. The content validity index (CVI) was 0.8 and Cronbach's alpha was 0.89 [13]. The FiND questionnaire consists of 5 questions:

[A] Do you have any difficulty walking 400 m? (no or some difficulty = 0, much difficulty or unable = 1)
[B] Do you have any difficulty climbing up a flight of stairs? (no or some difficulty = 0, much difficulty or unable = 1)
[C] During the last year, have you involuntarily lost more than 4.5 kg? (no = 0, yes = 1)
[D] How often in the last week did you feel that everything you did was an effort or that you could not get going? (2 times or less = 0, 3 or more times = 1)
[E] What is your level of physical activity? (at least 2–4 h per week = 0, mainly sedentary = 1)

A combined score of A + B + C + D + E = 0 was considered as "non-frail"; A + B = 0 and C + D + E ≥ 1 was considered as "frail"; and A + B ≥ 1 was considered as "disabled".

## 2.3. Data Collection

Data were collected using questionnaires and assessed using various tools. The general characteristics recorded included age, sex, religion, education, income, source of payment of medical expenses, history of family disease, present weight, weight one year ago, height, and body mass index. All participants were assessed using the Thai-language versions of FATMPH, FFP, and FiND. The inter-rater reliability was 1.0 between researchers and assistants.

## 2.4. Statistical Analysis

The data were analyzed using Stata 12.0 and are presented as frequency, percentage, mean, and standard deviation (SD). The frailty assessment tools were analyzed for their sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV); Cohen's kappa was used to measure the reliability of these assessment tools.

## 2.5. Evaluation Consequence

All participants identified as frail by any of the assessment tools were advised to undergo comprehensive geriatric assessment. The appropriate interventions were then provided to these individuals.

## 3. Results

The demographic characteristics of the 251 older participants from the OPD are shown in Table 1. Most were female and ranged in age from 60 to 69. The majority of the participants were married or living with a partner, had lower than a high school education, and were Buddhist. Half the participants were government officials. Most participants had an income of more than 10,000 baht per month. Their major source of income was from pensions, which provided an adequate income. The health status of the participants is shown in Table 2. Several medical conditions were identified among the participants.
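The FiND combination rule described in Section 2.2.3 (items A and B capture mobility difficulty; C, D, and E capture frailty markers) can be sketched as follows. This is an illustrative reading of the published rule, with function and variable names of my choosing:

```python
# Hypothetical sketch of the FiND decision rule (each item scored 0 or 1).
def find_category(a: int, b: int, c: int, d: int, e: int) -> str:
    if a + b >= 1:
        return "disabled"   # any mobility difficulty takes precedence
    if c + d + e >= 1:
        return "frail"      # no mobility difficulty, >= 1 frailty marker
    return "non-frail"      # all five items are 0

print(find_category(0, 0, 1, 0, 0))  # frail
print(find_category(1, 0, 1, 1, 1))  # disabled
```

Note that the order of the checks matters: a respondent scoring positive on both a mobility item and a frailty item is classified as disabled, not frail, which is what lets the instrument separate the two states.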
The most prevalent was hypertension, followed (in declining order of incidence) by dyslipidemia, diabetes mellitus, hyperuricemia, glaucoma or cataracts, chronic kidney disease, benign prostatic hypertrophy, coronary artery disease, cerebrovascular disease, and malignancy, followed by others. In this study, frailty status was evaluated using frailty assessment tools including FFP, FATMPH, and FiND. The frail and non-frail phenotypes were defined based on the combined results of all the assessment tools. The study found that the overall prevalence of frailty was $8.37\%$ based on FFP, most of whom were female ($90.47\%$). The frailty phenotype prevalence determined using FATMPH was $17.53\%$ (female = $65.91\%$); using FiND, the frailty phenotype prevalence determined was $3.98\%$ (female = $80.00\%$) (Table 3 and Table 4). The sensitivity, specificity, positive predictive value, and negative predictive value of the FATMPH and FiND tools were analyzed and compared with the standard FFP tool. As shown in Table 5, FATMPH had a sensitivity of $57.14\%$, a specificity of $86.09\%$, a positive predictive value (PPV) of $27.27\%$, and a negative predictive value (NPV) of $95.65\%$. FiND had a sensitivity of $19.05\%$, a specificity of $97.39\%$, a PPV of $40.00\%$, and an NPV of $92.94\%$. The comparison of FATMPH and FiND with FFP found the Cohen's kappa statistics to be 0.298 for FATMPH and 0.147 for FiND.

## 4. Discussion

Fried's Frailty Phenotype (FFP) is a well-known and regularly utilized tool for identifying frailty in older individuals [20]. In Thailand, FATMPH was developed as a frailty assessment tool based on FFP. Even though the Fried criteria were not initially intended to be used as a self-reported questionnaire, researchers now usually employ modified questionnaires based on this frailty phenotype [21,22].
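The four metrics reported against the FFP reference follow directly from a 2×2 table. The cell counts below are not published in the paper; they are back-calculated from the reported FFP prevalence (21 frail of 251) and the FATMPH sensitivity and specificity, so treat this as an illustrative reconstruction. The Youden index line is included only to show the relation J = sensitivity + specificity − 1 used elsewhere in the literature cited here:

```python
# Standard screening-test metrics from a 2x2 confusion table.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),    # positive predictive value
        "npv": tn / (tn + fn),    # negative predictive value
        "youden": sens + spec - 1,
    }

# FATMPH vs. FFP: counts back-calculated from the reported percentages
# (21 frail by FFP out of 251; 57.14% sensitivity -> TP = 12, FN = 9;
#  86.09% specificity -> TN = 198, FP = 32). Reconstructed, not raw data.
m = diagnostic_metrics(tp=12, fp=32, fn=9, tn=198)
print(round(m["ppv"] * 100, 2))  # 27.27
print(round(m["npv"] * 100, 2))  # 95.65
```

The reconstruction reproduces all four published FATMPH figures, which also illustrates why the PPV is low despite reasonable specificity: with only $8.37\%$ prevalence, false positives outnumber true positives.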
The Frail Non-Disabled (FiND) questionnaire, a self-administered frailty screening instrument designed to differentiate frailty from disability, was developed as a screening tool [23]. We focused on the comparison of both FATMPH and FiND with FFP, which is currently used to assess older patients at the OPD of the Family Medicine Department of the Maharaj Nakorn Chiang Mai Hospital Faculty of Medicine. Most of the participants had a chronic disease ($92.43\%$), most frequently hypertension ($65.75\%$). The prevalence of frailty in this study was $8.37\%$ using FFP, which is lower than the prevalence of frailty among community-based elderly people ($9.9\%$) [24]. Differences in frailty prevalence were due at least in part to differences in the assessment tools used, as well as the different geographical locations covered in this study. Frailty prevalence increased with age and was higher for females than males [3]. The relatively low prevalence of frailty in the study may be due to the fact that most of the participants were in the younger group of the elderly participants (60–69 years, $65.34\%$). A screening test is defined as a medical test or procedure performed on members (subjects) of a defined asymptomatic population or population subgroup to assess the likelihood of their members having a particular disease or condition [25]. A screening test has only two possible outcomes: positive, suggesting that the subject has the disease or condition; or negative, suggesting that the subject does not have the disease or condition [26]. In prior research, a Korean version of the FRAIL scale (K-FRAIL) was found to be consistent with the multidimensional frailty index and to be a concise tool for screening for frailty in a clinical setting in Korea [24]. In Thailand, many frailty assessment tools have been established for use both for community-dwelling individuals [14,27,28] and in hospitals [16,29]. 
There have, however, been few studies in Thailand that have included a comparison and validation of the frailty assessment tools used for older Thai adults in order to evaluate their diagnostic efficacy. A previous comparative study of the Thai version of the Simple Frailty Questionnaire (T-FRAIL) and the Thai Frailty Index (TFI) found that T-FRAIL was valid and reliable for frailty detection in elderly patients at a surgery out-patient clinic [16]. Another study of community-dwelling elderly compared several screening tests, including the CFS, the simple FRAIL questionnaire, the PRISMA-7 questionnaire, the TUG, and the GFST, with Fried's Frailty Phenotype method. That study found the simple FRAIL questionnaire and the GFST were the most appropriate tests for screening frailty due to their high sensitivity [15]. The present study is the first to compare the use of FATMPH and FiND with FFP regarding patients in an OPD for older Thai adults. The comparison of FATMPH and FiND found that the sensitivity of FATMPH ($57.14\%$) was higher than that of FiND ($19.05\%$), but that the specificity of FATMPH ($86.09\%$) was lower than that of FiND ($97.39\%$). FATMPH and FiND both had a lower sensitivity than the CFS ($56\%$), the simple FRAIL questionnaire ($88\%$), the PRISMA-7 questionnaire ($76\%$), the TUG ($72\%$), and the GFST ($88\%$), as reported in the study by Sukkriang and Punsawad [15], as well as the modified T-FRAILs, including T-FRAIL M1 ($83.3\%$) and T-FRAIL M2 ($85.8\%$), as reported in the study by Sriwong [16]. However, the categorizations of FiND (non-frail, frail, and disabled) are different from those of both FATMPH and FFP (non-frail, pre-frail, and frail), which could affect the sensitivity of the tests and might be a reason that FiND had the lowest sensitivity in the present study. FATMPH had a higher sensitivity than FiND because it was modified from FFP, but its sensitivity as a screening tool remains poor.
In addition, FATMPH and FiND both had high specificity, similar to other tools used in previous studies [15,16]. Most of the screening tools had a specificity higher than $85\%$: the CFS, at $98.41\%$, as found in a previous study [15]; and FiND, at $97.39\%$, as found in the present study. The sensitivities of both FATMPH and FiND were lower than $85\%$, suggesting that neither is an adequate screening tool [30], while the high specificities of both the CFS and FiND suggest they are appropriate for confirming the absence of the condition. FiND is a self-assessment questionnaire suitable for use for individuals in communities, as well as in primary care, whereas FFP is appropriate in primary care and acute care for both individuals in communities and in clinical settings, although the assessment time of FFP is longer than that of FiND [31]. The final judgement of whether or not these methods are appropriate will depend on the context. If the score is used as part of a sequence of screening steps, sensitivity is likely to be more important than specificity, while if the score is used to guide treatment initiation, specificity is equally important [32]. The reliability of FATMPH and FiND was compared with that of FFP and evaluated using Cohen's kappa statistic. The kappa values of FATMPH and FiND were 0.289 ($95\%$ CI = 0.132–0.445) and 0.147 ($95\%$ CI = 0.004–0.241), respectively. The levels of agreement for these values were fair (0.21 ≤ κ ≤ 0.40) and slight (0.00 ≤ κ ≤ 0.20) [33], respectively. Additionally, in a research context, this measure depends on the prevalence of the condition (with a very low prevalence, κ will be very low, even with high agreement between the raters) [32]. FATMPH's kappa agreement level was higher than FiND's because FATMPH was modified from FFP. Aguayo GA et al. [8], in a study of the agreement between 35 published frailty scores in the general population, found a very wide range of agreement (Cohen's kappa = 0.10–0.83).
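Cohen's kappa compares observed agreement $p_o$ with chance agreement $p_e$: κ = (p_o − p_e)/(1 − p_e). As a rough cross-check on the magnitudes quoted above, the sketch below computes kappa from the FATMPH-vs-FFP cross-table implied by the reported counts (a reconstruction from the published percentages, not the authors' raw data); it lands at about 0.289:

```python
# Cohen's kappa for a square agreement table (list of row lists).
def cohens_kappa(table):
    n = sum(sum(row) for row in table)
    # observed agreement: proportion of cases on the diagonal
    po = sum(table[i][i] for i in range(len(table))) / n
    # chance agreement: sum over categories of row marginal x column marginal
    pe = sum(
        sum(table[i]) * sum(row[i] for row in table)
        for i in range(len(table))
    ) / n**2
    return (po - pe) / (1 - pe)

# Rows: FATMPH (frail, not frail); columns: FFP (frail, not frail).
# Counts back-calculated from the reported prevalence and accuracy figures.
print(round(cohens_kappa([[12, 32], [9, 198]]), 3))  # 0.289
```

The prevalence dependence noted in the text is visible here: the diagonal is dominated by the 198 agreed non-frail cases, so even modest disagreement on the rare frail category pulls κ well below the raw percent agreement.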
The frailty phenotype properties were impacted by the modified frailty phenotype criteria [11]. The prevalence of frailty was $31.2\%$ for modified self-reported walking, $33.6\%$ for modified self-reported strength, and $31.4\%$ for modified self-reported walking and strength [11]. The agreement with the primary phenotype was 0.651 for modified self-reported walking, 0.913 for modified self-reported strength, and 0.441 for modified self-reported walking and strength [11]. FATMPH had a lower agreement (0.268) than that of the modified frailty phenotype. We think that the physical inactivity criterion of FATMPH, i.e., "Can you walk by yourself or do you need someone to help you? (no = 0, yes = 1)", should be re-evaluated, as it appears to be very similar to the walk speed criterion (4.5 m walk time: <7 s = 0, ≥7 s or cannot walk = 1). FFP has two measurements (grip strength and walking speed), but FATMPH uses only walking speed and includes fewer detailed questions. Frailty scores show marked heterogeneity because they are based on different concepts of frailty, and research results based on different frailty scores cannot be compared or pooled [8]. A limitation of our study is that it was not representative of all community-dwelling older Thais because the participants were all older patients at the OPD of an academic hospital (Maharaj Nakorn Chiang Mai Hospital) and most were urban residents receiving regular government welfare payments. Further studies of validated frailty assessment tools, such as multicenter studies, as well as of other assessment tools, are necessary to ensure their suitability for the Thai population context.

## 5. Conclusions

Our academic hospital-based study using the Thai-language version of the Frailty Assessment Tool of the Thai Ministry of Public Health (FATMPH) and the FiND questionnaire found that both have only a fair to slight agreement with Fried's Frailty Phenotype (FFP).
Additionally, their predictive power is low and, thus, insufficient for frailty detection in a clinical setting. Further multicenter study of these and other assessment tools is needed to improve frailty screening in older Thai populations.
# A Comparative Study of Periodontal Health Status between International and Domestic University Students in Japan

## Abstract

Background: In our previous study, international university students showed a significantly higher dental caries morbidity rate than domestic students. On the other hand, the periodontal health status of international university students has not yet been clarified. In this study, we compared the periodontal health status of international and domestic university students in Japan. Methods: We conducted a retrospective review of the clinical data of the university students that visited a dental clinic in the division for health service promotion at a university in Tokyo for screening between April 2017 and March 2019. Bleeding on probing (BOP), calculus deposition, and probing pocket depth (PPD) were investigated. Results: The records of 231 university students (79 international and 152 domestic university students) were analyzed; $84.8\%$ of international students were from Asian countries ($n = 67$). The international university students showed a higher percentage of BOP than domestic students ($49.4\%$ and $34.2\%$, respectively: $p \leq 0.05$) and they showed more extensive calculus deposition (calculus grading score [CGS]) than domestic university students (1.68 and 1.43, respectively: $p \leq 0.01$), despite no significant difference in PPD. Conclusions: The current study shows that international university students have poorer periodontal health than domestic students in Japan, even though the result might include many uncertainties and possible biases. To prevent severe periodontitis in the future, regular checkups and thorough oral health care are essential for university students, especially those from foreign countries.

## 1. Introduction

Periodontal diseases, along with dental caries, are an important public health problem in terms of their high prevalence, affecting approximately $90\%$ of the world's population [1,2,3].
Periodontal disease is classified into gingivitis and periodontitis. Gingivitis, the mildest form of periodontal disease, is caused by a bacterial biofilm that accumulates on teeth adjacent to the gingiva. Gingivitis does not affect the supporting structures of the teeth and is reversible. On the other hand, periodontitis, the advanced stage of periodontal disease, causes loss of connective tissue and bone support and is the leading cause of tooth loss in adults. In addition to pathogenic microorganisms in biofilms, genetic and environmental factors such as smoking are known to contribute to the cause of these diseases [3]. Severe periodontitis is reported to have the sixth highest prevalence in the world ($11\%$) [4]. Although dental caries used to be the leading cause of tooth loss in Japan [5,6], periodontal disease has now overtaken dental caries as the leading cause of tooth loss. Specifically, $30.2\%$ of men and $29.0\%$ of women lose their teeth due to dental caries, and $40.4\%$ of men and $34.9\%$ of women lose their teeth due to periodontal disease [7]. Unlike dental caries, periodontal disease often does not cause severe pain, thus regular checkups by dentists are essential. Furthermore, it has been shown that periodontal disease not only causes tooth loss, but also affects overall health [8]. Various diseases, including respiratory diseases [9,10], cardiovascular diseases [11,12,13], rheumatoid arthritis [14], diabetes [15], and others [16,17,18,19], have been reported to be associated with periodontal disease. Although periodontal disease is more prevalent in middle-aged and older adults, more than one-third of university students aged less than 20 years are already aware of gum bleeding, which is a major symptom of periodontal disease. Importantly, gum bleeding has been found to be closely associated with common systemic disorders in late adolescence, such as asthma [20,21,22].
The importance of periodontal disease prevention from a young age has been increasing [2]. According to the Japan Student Services Organization (JASSO: https://www.jasso.go.jp, accessed on 10 January 2023), the number of foreign students at Japanese institutions of higher education and Japanese language education is currently on the rise, although it is temporarily declining due to the COVID-19 pandemic. In a study by Ohsato et al. [23], the authors analyzed the medical records of 554 subjects (138 international and 416 domestic university students) and found no significant difference in dental treatment history between international and domestic university students ($49.3\%$ and $48.8\%$, respectively). However, the incidence of dental caries was significantly higher in international university students than in domestic university students ($60.1\%$ and $49.0\%$, respectively). The indices of decayed, missing, and filled teeth (DMFT) were also significantly higher in international university students than in domestic university students (5.0 and 4.0, respectively). International university students were found to have a significantly higher dental caries morbidity rate than domestic students in Japan [23]. On the other hand, the differences in periodontal health between international and domestic university students are not yet well defined. In this study, the periodontal health status of international and domestic university students was compared. ## 2.1. Study Design and Population Clinical data of university students who visited a dental clinic at The University of Tokyo for screening purposes (not for symptomatic or dental treatment purposes) between April 2017 and March 2019 were retrospectively analyzed. Students who held Japanese nationality or who were permanent residents of Japan were classified into the group of domestic university students in Japan. 
Of the 374 university students who visited the dental clinic for initial dental checkups, the records of 231 university students under 25 years of age (including 79 international students) were included in the analysis. No specific undergraduate or graduate school students were targeted. Periodontal health status was determined by individual examinations, each performed by one of three dentists, covering three parameters: probing pocket depth (PPD), bleeding on probing (BOP), and calculus grading scale (CGS) in the fully erupted permanent dentition excluding wisdom teeth. The same dental hygienist was present during all examinations to ensure that the examinations were performed properly. A community periodontal index (CPI) probe (YDM, Tokyo, Japan) was used to measure each tooth at six sites (mesiobuccal, mid-buccal, distobuccal, distolingual, mid-lingual, and mesiolingual) for the evaluation of PPD and BOP [24]. The PPD value of each tooth was determined as the deepest of the six locations listed above. The PPD value of each student was determined as the mean value of the PPD of each tooth. For BOP, if bleeding was observed in even one location, the student was classified as having BOP. CGS was determined as follows: NONE: no calculus deposition (scored as 1), MILD: calculus deposition on less than one-half of the tooth surface (scored as 2), SEVERE: calculus deposition on more than one-half of the tooth surface and/or extending below the gingival margin (scored as 3) [23]. This study was approved by the Research Ethics Committee of the University of Tokyo (approval number 13–146): “Retrospective analyses of medical and health record information retained by the division for health service promotion, the University of Tokyo.” ## 2.2. Statistical Analyses Statistical analysis was performed using the χ2 test for BOP evaluation and Student’s t-test for PPD and CGS evaluation. A value of $p \leq 0.05$ (two-sided) was accepted as statistically significant. 
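The per-student metrics defined in Section 2.1 above (tooth PPD as the deepest of six probed sites, student PPD as the mean over teeth, BOP as bleeding at any site) can be sketched in Python; the data structures and example values below are hypothetical illustrations, not study data:

```python
# Hypothetical sketch of the per-student periodontal metrics defined above.
# Each tooth is probed at six sites: mesiobuccal, mid-buccal, distobuccal,
# distolingual, mid-lingual, mesiolingual.

def tooth_ppd(site_depths_mm):
    # The PPD value of a tooth is the deepest of its six probed sites.
    return max(site_depths_mm)

def student_ppd(teeth_depths):
    # The PPD value of a student is the mean of the per-tooth PPD values.
    per_tooth = [tooth_ppd(d) for d in teeth_depths]
    return sum(per_tooth) / len(per_tooth)

def student_has_bop(teeth_bleeding):
    # A student is classified as having BOP if bleeding is observed
    # at even one probed site of any tooth.
    return any(any(sites) for sites in teeth_bleeding)

# CGS coding used in the study: NONE = 1, MILD = 2, SEVERE = 3
CGS = {"NONE": 1, "MILD": 2, "SEVERE": 3}

# Example with two hypothetical teeth, six site depths each, in mm:
depths = [(2, 1, 2, 3, 1, 2), (1, 1, 2, 2, 1, 1)]
print(student_ppd(depths))  # (3 + 2) / 2 = 2.5
print(student_has_bop([(False,) * 6,
                       (False, True, False, False, False, False)]))  # True
```

Group-level figures such as the mean CGS reported in the Results would then be simple averages of these per-student values within each group.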
All the analyses were conducted using the statistical software program: Statistical Package for Social Sciences (SPSS version 21.0, IBM Corporation, Armonk, NY, USA). No statistical sample size calculations were conducted. ## 3.1. Region of Origin of International University Students The records of all university students under 25 years of age who visited a dental clinic for checkups (not for symptomatic or dental treatment) between April 2017 and March 2019 were analyzed. Of the total 231 university students, 152 were domestic students and 79 were international students. Of the international students, $84.8\%$ were from Asian countries ($$n = 67$$), which was the highest percentage, followed by North America and Europe, both $5.1\%$ ($$n = 4$$). In Asia, China accounted for $83.6\%$ ($$n = 56$$) of all Asian international university students, followed by South Korea with $6.0\%$ ($$n = 4$$), Singapore and Thailand each with $3.0\%$ ($$n = 2$$), and Hong Kong, Taiwan, and Malaysia each with $1.5\%$ ($$n = 1$$). Among international university students, $45.6\%$ ($$n = 36$$) were male and $54.4\%$ ($$n = 43$$) were female; among domestic university students, $77.6\%$ ($$n = 118$$) were male, and $22.4\%$ ($$n = 34$$) were female (Figure 1). ## 3.2. Difference in Bleeding on Probing (BOP) between International and Domestic University Students in Japan The mean number of remaining teeth for all university students was 27.6 (maximum number of teeth: 28 excluding wisdom teeth). The mean number of remaining teeth for international students was 27.2, while that for domestic students was 27.7. The periodontal status of those remaining teeth was evaluated in this study. Overall, 91 of 231 ($39.4\%$) university students showed BOP. By gender, 60 of 154 ($39.0\%$) males and 31 of 77 ($40.3\%$) females had BOP. There were no significant differences between males and females. 
In total, 39 of 79 ($49.4\%$) international university students and 52 of 152 ($34.2\%$) domestic university students showed BOP. The international university students showed a higher percentage of BOP than domestic university students ($p \leq 0.05$). Among international university students, females tended to exhibit BOP at a higher rate than males ($55.8\%$ and $41.7\%$, respectively: $$p = 0.21$$). On the other hand, among domestic university students, males showed BOP at a higher rate than females ($38.1\%$ and $20.6\%$, respectively: $$p = 0.057$$), although the difference was not significant (Figure 2, Supplementary Table S1). ## 3.3. Differences in Calculus Deposition between International and Domestic University Students The mean calculus grading score (CGS) of the total ($$n = 231$$) was 1.52. By gender, the mean CGS for males ($$n = 154$$) was 1.55, and for females ($$n = 77$$) was 1.45, showing no significant difference. The mean CGS of international university students ($$n = 79$$) was 1.68 and that of domestic university students ($$n = 152$$) was 1.43. The international university students showed more extensive calculus deposition than domestic students ($p \leq 0.01$). Among international university students, males tended to have higher CGS than females (1.83 and 1.56, respectively: $$p = 0.10$$). Similarly, among domestic university students, males tended to have higher CGS than females (1.46 and 1.32, respectively: $$p = 0.24$$) (Figure 3, Supplementary Table S1). ## 3.4. Difference in Probing Pocket Depth (PPD) Status between International and Domestic University Students The mean PPD of the total university students ($$n = 231$$) was 1.68 mm. By gender, the mean PPD for males ($$n = 154$$) was 1.67 mm and for females ($$n = 77$$) was 1.70 mm, showing no significant difference. 
The mean PPD of international university students ($$n = 79$$) was 1.77 mm and that of domestic university students ($$n = 152$$) was 1.64 mm, showing no significant difference between the two groups. There was no gender difference in PPD for either international university students or domestic university students (Figure 4, Supplementary Table S1). ## 3.5. The Association between BOP and PPD in International and Domestic University Students The mean PPD of the university students with BOP was 1.98 mm, and the mean PPD of the students without BOP was 1.49 mm, showing a large difference ($p \leq 0.001$). For international university students, the mean PPD for students with BOP was 2.05 mm, while the mean PPD for students without BOP was 1.49 mm. The mean PPD of international students with BOP was significantly larger than that of international students without BOP ($p \leq 0.001$). The mean PPD for domestic university students with BOP was 1.93 mm, and the mean PPD for domestic university students without BOP was 1.48 mm. The mean PPD of domestic students with BOP was significantly larger than that of domestic students without BOP ($p \leq 0.001$). Within the population that showed BOP, there was no significant difference in PPD between international and domestic students. Even among the population without BOP, there was no significant difference in PPD between international and domestic students (Figure 5, Supplementary Table S2). ## 4. Discussion The current study showed that international university students in Japan had a higher rate of bleeding on probing (BOP) and more extensive calculus deposition than domestic university students. Although probing pocket depth (PPD) was found to be at a physiological level for both international and domestic students, and no differences were observed, students with BOP showed significantly larger PPD values than those without BOP, regardless of international or domestic students. 
Oral diseases such as caries, periodontal disease, tooth loss, oral infections, oral cancer, and malocclusion are among the most prevalent diseases worldwide and carry serious health and economic burdens that significantly reduce the quality of life of those affected, and their impact is immeasurable [25]. Oral diseases, like most non-communicable diseases (NCDs), are chronic and susceptible to social context, such as economic status. Chronic untreated oral diseases often have serious consequences, not only in terms of pain and other symptoms and progression to systemic diseases (e.g., sepsis), but also in terms of reduced quality of life and work productivity. The cost of treating oral diseases also imposes a significant financial burden on households and health care systems [25]. Unfortunately, oral diseases have not been given much importance in global health policy, including in Japan, despite the fact that they are a global public health problem. In recent years, however, the need to treat oral diseases as an urgent priority for global health has begun to be stated [2,25,26,27,28]. Among oral diseases, periodontal disease is of particular public health importance because it occurs with such high frequency that it is estimated to affect $90\%$ of the world’s population [3]. Periodontal disease is the most common disease affecting tooth-supporting structures and is therefore a common cause of tooth loss [11,29,30,31]. In Japan, periodontal disease has replaced dental caries as the leading cause of tooth loss [7]. Importantly, periodontal disease has also been shown to be associated with a variety of systemic diseases including respiratory diseases [9,10], cardiovascular diseases [11,12,13], rheumatoid arthritis [14], diabetes [15], and many other disorders [16,17,18,19,32]. Therefore, the importance of prevention and treatment of periodontal disease has been recognized by society and has become a focus of public health in recent years [25,28,33]. 
The relationship between systemic diseases and periodontal disease has been discussed mainly in middle-aged and older adults. Recently, however, it has been shown that late adolescents who have gingival bleeding are significantly more likely to suffer from systemic diseases such as asthma and otitis media/externa [21]. Based on the above, it is not surprising that the relationship between periodontal disease and lifestyle-related diseases such as diabetes and stroke, which are common in middle-aged and older adults, has already begun during late adolescence. Although subjective symptoms of periodontal disease usually become apparent after the age of 40, it is common for young people to develop gingivitis, an early stage of periodontal disease, and in a survey of 17- to 19-year-old university students, $36.5\%$ of them complained of gingival bleeding [21]. This result suggests that one out of every three persons in their late teens already has gingivitis. In addition, the Dental Health Division of the Health Policy Bureau of the Ministry of Health in Japan reported that periodontal pockets rapidly become deeper after the age of 20 years [20]. This suggests that periodontal disease has already begun in late adolescence, indicating the need for periodontal disease countermeasures for young people [22]. According to the Japan Student Services Organization (JASSO: https://www.jasso.go.jp accessed on 10 January 2023), an independent administrative agency under the jurisdiction of the Ministry of Education, Culture, Sports, Science and Technology, the number of foreign students at Japanese institutions of higher education and Japanese language education is on the rise (although the situation is currently exceptional due to the COVID-19 pandemic), with the largest number of students from Asian countries. 
In this study, foreign students from Asia accounted for the largest proportion of foreign students ($84.8\%$), with students from China accounting for a particularly large share ($83.6\%$) of all foreign students from Asia. Therefore, the exclusion of international students from North America and Europe (each comprising $5.1\%$ of all international students) did not significantly change the results of the current study. It is not clear whether the percentage of foreign students from Europe and North America will increase in the future but, at present, foreign students from Asia are by far the largest group of foreign students in Japan. Therefore, it is necessary to consider many social factors such as differences in culture, customs, insurance systems, and medical services in order to provide better oral healthcare services to international students from Asian countries. Since there are still few studies comparing the oral health status of international students and domestic students in Japan, more data need to be accumulated in the future [23,34]. In the current study, $34.2\%$ of domestic university students showed BOP, more than a third of the subjects, which was similar to our previous survey [21]. On the other hand, international university students showed a higher percentage of BOP ($49.4\%$) than domestic university students. A large difference was observed between international and domestic university students in BOP (Figure 2). Among international university students, women tended to have a higher percentage of BOP than men, while among domestic university students, males tended to have a higher percentage of BOP than females, consistent with the result of a previous survey [1]. Although this difference needs to be examined with a larger number of subjects, it is interesting because cultural differences and social backgrounds may be involved. In a study by Ohsato et al. 
[23], severe calculus deposition was observed more often in international university students ($51.9\%$) than in domestic students ($31.7\%$) in Japan [23]. Similar results were obtained in the present study, with international university students showing more extensive calculus deposition than domestic students ($p \leq 0.01$) (Figure 3). This difference can be attributed to differences in food culture, socioeconomic differences, and lifestyle habits such as brushing teeth and dental visits. University students with BOP had greater PPD values than those without BOP, although, even for students with BOP, the depth of the gingival sulcus was at the physiological level (gingival pocket) (Figure 5). The presence of BOP without periodontal pockets indicates the presence of gingivitis. Gingivitis is a reversible stage in which healthy periodontal tissue can be restored [3,25,35,36,37]. Thus, some measures are needed to prevent the transition from gingivitis to periodontitis. What measures should be taken to prevent periodontitis in young people? Recently, it has been shown that the frequency and duration of tooth brushing affect gingival health in late adolescence [1,38]. 
In a survey of 9098 university students aged 17–19 years, regarding the frequency of tooth brushing, it was reported that the risk of gingival bleeding for university students who brush their teeth “less than once” is 2.36 times that of those who brush their teeth “three or more times,” and even for those who brush their teeth “twice” the risk of gingival bleeding is 1.45 times that of those who brush “three or more times.” Regarding the duration of tooth brushing, it is known that university students who brush their teeth “1 minute or less” have 1.57 times the risk of gingival bleeding compared to those who brush “4 minutes or more” and those who brush “2 to 3 minutes” have 1.26 times the risk compared to those who brush “4 minutes or more.” University students who brush their teeth less frequently and for less time have a higher risk of gingival bleeding. This result implies that the risk of periodontal disease decreases as the frequency and duration of tooth brushing increases. Therefore, in addition to dental checkups, it is important to raise oral hygiene and oral health awareness among the younger generation. However, unfortunately, the working-age population from high school graduation (age 18) to age 40 does not have opportunities to receive dental examinations or oral care instruction, except for special examinations limited to targeted occupations in Japan. Considering that periodontal disease begins in late adolescence and becomes apparent in the forties, it seems essential to establish a seamless oral hygiene management system that compensates for this gap period in Japan [2]. The current study revealed that international university students in Japan have poorer periodontal health status than domestic university students. 
Although the number of university students included in the study was not large and more studies with a larger number of students are needed, the results suggest that regular checkups and thorough oral care are essential for university students, especially international students, in order to prevent periodontitis. It has been suggested that the relationship between periodontal disease and systemic health status already occurs in late adolescence [21,39,40,41]. From the viewpoint of preventing systemic diseases, oral health care in late adolescence will become increasingly important in the future. ## 5. Conclusions International university students in Japan showed a higher percentage of bleeding on probing (BOP) and more extensive calculus deposition than domestic university students, despite no significant difference in probing pocket depth (PPD). Students with BOP had significantly greater PPD values than those without BOP in both international and domestic students, although the values were at physiological levels. To prevent periodontitis, we have to pay more attention to the periodontal health care of university students, especially international students.
# Determinants of Deteriorated Self-Perceived Health Status among Informal Settlement Dwellers in South Africa ## Abstract Self-perceived health (SPH) is a widely used measure of health amongst individuals that indicates an individual’s overall subjective perception of their physical or mental health status. As rural to urban migration increases, the health of individuals within informal settlements becomes an increasing concern as these people are at high health and safety risk due to poor housing structures, overcrowding, poor sanitation and lack of services. This paper aimed to explore factors related to deteriorated SPH status among informal settlement dwellers in South Africa. This study used data from the first national representative Informal Settlements Survey in South Africa conducted by the Human Sciences Research Council (HSRC) in 2015. Stratified random sampling was applied to select informal settlements and households to participate in the study. Multivariate logistic regression and multinomial logistic regression analyses were performed to assess factors affecting deteriorated SPH among the informal settlement dwellers in South Africa. Informal settlement dwellers aged 30 to 39 years old (OR = 0.332 $95\%$CI [0.131–0.840], $p \leq 0.05$), those with ZAR 5501 and more household income per month (OR = 0.365 $95\%$CI [0.144–0.922], $p \leq 0.05$) and those who reported using drugs (OR = 0.069 $95\%$CI [0.020–0.240], $p \leq 0.001$) were significantly less likely to believe that their SPH status had deteriorated compared to the year preceding the survey than their counterparts. 
Those who reported always running out of food (OR = 3.120 $95\%$CI [1.258–7.737], $p \leq 0.05$) and those who reported having suffered from illness or injury in the past month preceding the survey (OR = 3.645 $95\%$CI [2.147–6.186], $p \leq 0.001$) were significantly more likely to believe that their SPH status had deteriorated compared to the year preceding the survey than their counterparts. In addition, those who were employed were significantly (OR = 1.830 $95\%$CI [1.001–3.347], $$p \leq 0.05$$) more likely to believe that their SPH status had deteriorated compared to the year preceding the survey than those who were unemployed with neutral SPH as a base category. Overall, the results from this study point to the importance of age, employment, income, lack of food, drug use and injury or illness as key determinants of SPH amongst informal settlement dwellers in South Africa. Given the rapid increasing number of informal settlements in the country, our findings do have implications for better understanding the drivers of deteriorating health in informal settlements. It is therefore recommended that these key factors be incorporated into future planning and policy development aimed at improving the standard of living and health of these vulnerable residents. ## 1. Introduction Rural–urban migration in Africa and South Africa, in particular, is a key contributor to the increase in people living in informal settlements. Whilst moving to these urban settlements holds the promise of a better lifestyle and economic opportunities, urban informal settlements in South Africa are often characterised by overcrowding, safety issues, unemployment, hunger, poor basic services delivery and inequalities [1,2,3,4]. 
The risks imposed by physical housing structures and living environments in informal settlements have considerable impacts on the health and well-being of these vulnerable groups, potentially exposing them to various diseases [2,5,6] and making them especially vulnerable during pandemics such as the COVID-19 pandemic [7,8]. It is anticipated that the implementation of Universal Health Coverage in South Africa, namely National Health Insurance (NHI), will have positive effects on the health of these informal settlement dwellers. For instance, it was reported that the Health Transformation Plan (HTP) had good effects on the health level of informal settlement residents in Iran by ensuring that they had insurance coverage and reducing many economic, social as well as cultural problems, with reduced out-of-pocket expenditures [9]. Previous studies show that informal settlement dwellers are more likely to self-report ill health and, due to the spatial and social marginalisation, are at an increased risk of experiencing mental health issues [7,10,11]. These vulnerable communities in informal settlements often find themselves further marginalised through labour policies that are not designed to accommodate them [8]. Self-perceived health (SPH), also commonly called self-reported health, self-rated health or self-assessed health, is a widely used and acceptable measure of health across individuals that has been applied both in international and South African studies [11]. Various studies have validated it as a good measure of health that is consistent with objective measures of health [11] and also as a strong predictor of mortality [12,13], morbidity [13,14] and healthcare use [15]. The World Health Organization (WHO) recognises it as one of the best measures of health [16]. SPH does not focus on one specific dimension of health, but rather it is used as an indicator of an individual’s overall subjective perception of their physical or mental health status. 
Thus, the presence of any health condition is a predictor of self-perception of health [17,18]. SPH is commonly measured using a single item health measure on a three- or five-point scale ranging from good to bad. Options can take the form of “very good, good, fair, bad, very bad”. Using this scale, individuals are then required to rate their health. The factors that influence SPH include health-related predictors, clinically diagnosed health status, history of chronic illnesses, lifestyle factors, socio-economic status and social factors [19,20,21,22]. Studies have described health status in relation to living environments within informal settlements in South Africa [2,3,4,6,10,23,24,25]. These studies show that a majority of informal settlement dwellers suffer a disproportionate burden of sickness and disease. Studies that have assessed the determinants of health in poor urban communities in South Africa have focused on a specific disease or a specific community [26,27,28]. There are some studies that have been undertaken to explore factors affecting poor SPH, even though some were not focused on informal settlements. For instance, Kasenda et al. [29] investigated the prevalence of poor SPH and its determinants among 962 participants in Malawi. Kasenda et al. [29] found that poor SPH was associated with being female, increasing age, decreasing education, frequent health care attendance as well as living with disability. Kasenda et al. [29] further reported that prevalence of poor SPH in Malawi was in line with findings from other countries. Mlangeni et al. [30] explored factors associated with poor SPH amongst individuals from KwaZulu-Natal using data from the 2012 South African national household survey. Mlangeni et al. [30] reported that fair/poor SPH was significantly associated with being older, HIV-positive, being an excessive drinker, being educated, being employed and not accessing care regularly. Mlangeni et al. 
[30] recommended that education, job opportunities, social services for poor living conditions and poor well-being, provision of health insurance as well as incorporating health promotion initiatives as part of social support and public services for substance abusers should be considered. Patterson et al. [31] assessed self-rated physical health and related factors in youth residing in slums or informal settlements in Uganda. Patterson et al. [31] found that poor self-rated physical health was significantly associated with older age, lower education, having been injured due to their drinking and having initiated alcohol use early, among others. Patterson et al. [31] further indicated that poor living conditions in the slums are exacerbated by a range of health concerns and risk behaviours, which impact youth’s physical health and can adversely impact their long-term health and longevity if no interventions are undertaken. To the best of our knowledge, no nationally representative study has assessed the factors associated with SPH in informal settlements in South Africa, let alone deteriorated or poor SPH. The evaluation of factors associated with SPH in the context of living environments is essential for the design of strategies to improve health. This paper aims to expand on the existing body of literature on health in South African informal settlements by exploring the factors related to deteriorated SPH status among informal settlement dwellers in South Africa. The need to address these issues is entrenched in the United Nations Sustainable Development Goals (SDGs)—a set of internationally agreed goals and targets for sustainable development by 2030. SDG 3, which targets good health and well-being, can only be met through strategies that include informal settlements [32]. For SDG 3 to be met, living conditions need to be addressed as set out in SDG 11, which seeks to make cities inclusive, safe, resilient and sustainable. 
A study of this nature is also important because there is a lack of longitudinal studies that assess the impact of informal settlement upgrading or informal settlement housing and basic infrastructural service improvements on health in South Africa [2]. As the study focuses on informal settlements targeted for upgrades, it forms the basis for future studies that seek to explore the health benefits of these settlement upgrades. Furthermore, with the continued growth of informal settlements, it is important to assess the factors that influence SPH. Findings from this study could provide a narrative for policies and interventions targeted at improving population health in informal settlements. ## 2.1. Data This paper used data from the first national representative Informal Settlements Survey in South Africa conducted by the Human Sciences Research Council (HSRC) in 2015. For more details on the methods employed in the survey, please see Ndinda et al. [33]. Briefly, a stratified random sampling method was employed. The total number of informal settlements targeted for upgrading per province was recorded. This was used as the informal settlement sampling frame. The total number of informal settlements differed by province and only $10\%$ were sampled in each province. The number of households in each of the visited informal settlements across the country was generated using satellite imagery. This number of households per informal settlement was used as the sampling frame for household sampling. The total number of households differed by informal settlement and only a fixed number of 45 households were sampled in each informal settlement. This means that informal settlements and households did not have an equal chance of being sampled or selected. The data were weighted to correct this potential bias due to unequal sampling probabilities as well as to obtain a nationally representative sample of informal settlements targeted for upgrading in South Africa. 
The weights were applied using the realised sample in both cases, that is, visited informal settlements and interviewed households. A total of 75 informal settlements were successfully visited across the country (Figure 1). See Appendix A (Figure A1, Figure A2, Figure A3 and Figure A4) for some visual materials about the informal settlements. About 2380 household heads were interviewed using a semi-structured household questionnaire from these informal settlements. The informal settlement weight was calculated as the inverse of the probability of the informal settlement being realised in a province, while the household weight was calculated as the inverse of the probability of the household being interviewed in an informal settlement. The final weight was the product of the informal settlement weight and the household weight. A paper-based semi-structured household questionnaire was used for collection of the data and was administered by research assistants. The household questionnaire consisted of geographic particulars, household roster (demographics, education and economic activity of household members), living standard measure, health and nutrition, housing and tenure, access to services and crime and safety (Supplementary File [questionnaire] attached). In terms of exclusion and inclusion criteria, although a total of 2380 household respondents were interviewed in the whole survey, only 2242 respondents answered the main outcome question, which asked how their health compared with one year prior to their taking the survey. Therefore, the final sample size considered for analysis in this paper was 2242. This is because respondents were informed of their right to skip any question they were not willing to answer. ## 2.2. Measures For the outcome variable, the SPH was considered. 
Respondents were asked how their health compared with one year prior to taking the survey, with the response options: 1 = somewhat better, 2 = much better, 3 = about the same, 4 = much worse and 5 = somewhat worse. For the multivariate logistic regression analysis, these options were dichotomised into: 1 = worse/deteriorated (much worse and somewhat worse) and 0 = better/about the same (somewhat better, much better and about the same). The outcome variable was dichotomised, and multivariate logistic regression used, because this study focused on determinants of deteriorated SPH rather than SPH in general; a similar practice, focusing on one aspect of SPH, was followed in previous studies [29,30,31,34,35,36,37]. For the ordered logistic regression, the outcome variable was categorised into three groups: worse/deteriorated (much worse and somewhat worse), neutral/about the same (about the same) and better/improved (somewhat better and much better). Explanatory variables included demographic factors such as sex (male or female), age (18–29, 30–39, 40–49, 50–59 and 60+) and marital status (married/cohabiting, divorced/widowed/separated and single/never married). Socioeconomic factors included education (no/primary school, secondary school and matric/higher), employment (unemployed or employed), household income per month (ZAR 0–ZAR 2000, ZAR 2001–ZAR 5500 and ZAR 5501 and more), whether the household had ever run out of food (yes or no) and Living Standard Measure (low, medium and high). The Living Standard Measure was developed using Multiple Correspondence Analysis (MCA).
The following 19 asset variables, each with a yes-response n > 100, were selected from the 35 assets: fridge, deep freezer, VCR/DVD, cell phone, washing machine, internet access, electric/gas stove without oven, TV, radio, HI-FI, microwave oven, MNET/DSTV, car, iron, electric/gas stove with oven, fan, mattress, bicycle and tools (see Appendix B). All asset variables were coded 0 = no and 1 = yes. Health-related and behavioural factors included illness or injury suffered in the month prior to taking the survey (yes/no), tobacco use (yes/no), alcohol use (yes/no) and drug use (yes/no). ## 2.3. Data Analysis Data were analysed in Stata version 15.0 [38]. As indicated, the data were weighted to correct potential bias due to unequal sampling probabilities and to allow findings to be generalised to a nationally representative sample of informal settlements targeted for upgrading in South Africa. The Stata “svy” command was used to incorporate these weights during data analysis. Differences in categorical variables were compared using Chi-square tests. Multivariate logistic regression analysis was performed to assess factors affecting deteriorated SPH among informal settlement dwellers in South Africa. Furthermore, ordered logistic regression was considered to attain a better understanding of factors associated with deteriorated/worse SPH relative to each of the other two groups, neutral/about the same and better/improved, separately, unlike the multivariate logistic regression in which these two groups were combined. The Stata “omodel” command was used to test the proportional odds assumption, and the results revealed that the assumption was violated. Multinomial logistic regression analysis, which has been used for ordered outcome variables in previous studies [39,40,41,42], was therefore considered for further analysis.
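The outcome recoding that feeds the binary and three-level models described above can be sketched as follows. This is a hypothetical Python reconstruction for illustration only; the actual recoding and modelling were done in Stata, and the numeric codes assumed here follow the response options listed in Section 2.2.

```python
# Assumed questionnaire codes: 1 = somewhat better, 2 = much better,
# 3 = about the same, 4 = much worse, 5 = somewhat worse.

WORSE = {4, 5}  # much worse, somewhat worse

def binary_sph(code: int) -> int:
    """Binary outcome for the multivariate logistic regression:
    1 = worse/deteriorated, 0 = better/about the same."""
    return 1 if code in WORSE else 0

def three_level_sph(code: int) -> str:
    """Three-group outcome for the ordered/multinomial models."""
    if code in WORSE:
        return "worse"
    if code == 3:
        return "neutral"
    return "better"  # somewhat better, much better

responses = [1, 2, 3, 4, 5]
print([binary_sph(c) for c in responses])       # [0, 0, 0, 1, 1]
print([three_level_sph(c) for c in responses])  # ['better', 'better', 'neutral', 'worse', 'worse']
```

In the multinomial models, either the "better" or the "neutral" group would then serve as the base category, as described next.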
As the focus of this study was on determinants of deteriorated SPH, two models were run: better/improved SPH was the base category in the first model, while neutral SPH was the base category in the second. Odds Ratios (ORs) were reported from the multivariate logistic regression and the multinomial logistic regression. Confidence Intervals (CIs) were set at $95\%$, with a p value ≤ 0.05 considered statistically significant in all analyses. ## 3.1. Background Characteristics of Respondents The study sample used for this paper consisted of 2242 respondents. Males constituted $54.5\%$ of the sample while females accounted for $45.5\%$ (Table 1); the difference was not significant ($p = 0.489$). The dominant age group was those aged 30 to 39 years old, at about $30\%$, followed by those aged 40 to 49 years old at $26.1\%$. In terms of marital status, just below half ($48.1\%$) were married or cohabiting, followed by $43.8\%$ who were single or never married. No/primary school and secondary school each accounted for around $37\%$. The majority of respondents, $58.9\%$, fell in the ZAR 0 to ZAR 2000 household income band. Almost one third ($31.6\%$) of the informal settlement dwellers were smokers. ## 3.2. Deteriorated SPH Status among Informal Settlement Dwellers Table 2 highlights deteriorated SPH and explanatory factors among informal settlement dwellers across the country. Deteriorated SPH status was significantly higher among those with no/primary school ($19.6\%$) and those who did not use drugs ($15.3\%$) compared to their relevant counterparts. Informal settlement dwellers who never ran out of food ($10.0\%$) and those who did not experience illness or injury in the month prior to taking the survey ($11.4\%$) were significantly less likely than their relevant counterparts to believe that their SPH had deteriorated compared to the year prior to taking the survey. ## 3.3.
Factors Influencing Deteriorated SPH Status among Informal Settlement Dwellers Informal settlement dwellers aged 30 to 39 years old were significantly less likely (OR = 0.332, $95\%$ CI [0.131–0.840], $p < 0.05$) to believe that their SPH status had deteriorated compared to the year preceding the survey than those aged 18 to 29 years old (Table 3). Dwellers with a household income of ZAR 5501 and more were significantly less likely (OR = 0.365, $95\%$ CI [0.144–0.922], $p < 0.05$) to believe their SPH status had deteriorated than those in the ZAR 0 to ZAR 2000 household income band. Those who reported always running out of food were significantly more likely (OR = 3.120, $95\%$ CI [1.258–7.737], $p < 0.05$) to believe their SPH status had deteriorated than those who never ran out of food. Residents who reported having suffered from illness or injury in the month preceding the survey were significantly more likely (OR = 3.645, $95\%$ CI [2.147–6.186], $p < 0.001$) to believe their SPH status had deteriorated than those who did not. Those who reported using drugs were significantly less likely (OR = 0.069, $95\%$ CI [0.020–0.240], $p < 0.001$) to believe their SPH status had deteriorated than those who did not use drugs. Furthermore, the multinomial logistic regression models showed that similar factors (age, running out of food, injury or illness and drug use) were significantly associated with deteriorated SPH status among informal settlement dwellers, as was the case in the multivariate logistic regression analysis (Table 4). The only difference was that household income was not significant in the multinomial logistic regression models; instead, employment was significant when neutral SPH was used as the base category.
For instance, employed residents were significantly more likely (OR = 1.830, $95\%$ CI [1.001–3.347], $p < 0.05$) to believe that their SPH status had deteriorated compared to the year preceding the survey than those who were unemployed, with neutral SPH as the base category. ## 4. Discussion This paper aimed to investigate factors related to deteriorated SPH status among informal settlement residents in a national survey conducted in 2015 in South Africa. The study found that informal settlement residents aged between 30 and 39 years, those in the higher income bracket (ZAR 5501 and more) and those reporting drug use were significantly less likely to report that their SPH had deteriorated compared to the previous year than their respective counterparts. Age has been found to be associated with SPH in previous studies [43,44]. This association is not consistent across studies, in the sense that the age ranges associated with SPH vary. For example, those aged 85 years and older were found to have higher SPH than those aged 64 to 75 years in one study [45], while another found no significant differences in SPH between those aged 75 and older and those aged between 35 and 44 years [46], and other studies generally found similarities in SPH across age subgroups [22]. Bonner et al. [30] found that between $75\%$ and $86\%$ of those aged 40 years and older reported good health. Most participants in this study, $55.9\%$ of the total, were between 30 and 49 years old. This relatively young cohort might partly explain the significant perception that SPH had not deteriorated.
Contrary to the finding of this study that employed residents were significantly more likely to report deteriorated SPH, Chola and Alaba [34] found that those employed were significantly more likely to report good SPH, while Mlangeni [30] found that those employed were significantly less likely to have fair/poor SPH than those who were unemployed. This finding might reflect the fact that informal settlement residents are predominantly poor, so even those who are employed might be earning little and are therefore not far apart in wealth from their unemployed counterparts. However, this finding needs to be explored further, as it is commonly known that poor residents who are unemployed are more likely to report poor SPH, especially in the informal settlement setting. A higher income being associated with perceptions that health status had not deteriorated is consistent with previous studies. Research has shown that negative perceptions of environmental hazards were associated with poor self-perceived health in a low-income community [46]. Moreover, factors such as lower socio-economic status, living in slums, living in a low-income household and poverty were also associated with poor self-rated health [47,48]. Higher income seems to have had a protective effect against poor SPH status. The finding that those who reported using drugs perceived that their health status had not deteriorated is inconsistent with the literature. Previous studies reported that the more drugs a person used, the greater the likelihood of reporting poor SPH. In certain instances, users of opioids were found to have poorer self-rated health than other drug users [49], and those who frequently used drugs to cope had higher odds of reporting poor SPH [50].
A possible explanation for the perception among those who reported using drugs that their SPH status had not deteriorated is that they may have consumed drugs at the time of the interview; an inebriated state could mask their actual perceptions. In addition, a very small number of informal settlement residents who indicated they used drugs reported that their SPH had deteriorated compared to the year preceding the survey, which could also contribute to this inconsistent finding. Those who reported running out of food and those who had suffered from illness or injury in the past month were more likely to believe that their SPH status had deteriorated compared to the preceding year. People diagnosed with clinical evidence of ill health, or those who report morbidity, are generally more likely to report poor SPH [51,52]. Poor SPH has also been shown to be associated with frailty and prefrailty in urban-living older adults [53]. The evidence suggests that factors that are immediate and personal to the individual, such as currently living with an ailment or being on treatment, have a significant impact on the overall perception of wellbeing. SPH should be viewed as reflecting people’s lived experiences, their perceptions of health, their access to healthcare and how these interact with lifestyle factors, and should also include biological factors such as sex [54]. This means that a more holistic view of health will have to be adopted, since people who live in informal settlements are constantly navigating structural constraints imposed by a lack of access to amenities. More specifically, the informal settlements earmarked for upgrading that were sampled in this study were characterized by a lack of basic services: as many as $52\%$ did not have access to electricity, $55\%$ used communal taps and $53\%$ used pit latrines [42].
This state of deprivation is likely to lead to distress and low self-esteem, which have been shown to be negatively associated with good health [22]. Therefore, when reporting on SPH, it is important to include variables that characterize and seek to incorporate both the physical and social environments [55]. The foremost goal of conducting this kind of research is to identify vulnerable groups and the ways in which individuals and communities experience poor health [54]. The findings in this study identify some of the specific factors that can be targeted in designing interventions to improve the wellbeing of informal settlement residents in South Africa. These factors can be broadly categorized as structural (higher income, employment and running out of food) and individual (age, use of drugs and injury and illness) to help with the development of these interventions. The findings from this study also provide an overview of the general health conditions of residents of informal settlements targeted for upgrading in South Africa. It is therefore recommended that these key factors be incorporated into future planning and policy development aimed at improving the standard of living and health of these vulnerable residents. In addition, based on the findings of this research, the authors recommend that deteriorated or poor SPH be considered as an indicator of poor health status, especially where physical health examination is not financially feasible. Since the urban poor also make up the majority of the labour in the cities, labour legislation that makes provision for decent housing could help alleviate the structural and environmental influences on ill health and poor SPH [8]. Among the limitations of this study, it is important to bear in mind that SPH is subject to both recall bias and social desirability bias.
However, social desirability bias was likely mitigated by respondents’ need for improved services, which would encourage more accurate responses. The sample is skewed towards unemployed and lower-income groups; poor SPH may therefore be overestimated. Nevertheless, the findings from this study provide a general picture of deteriorated SPH and related factors among informal settlements in South Africa. ## 5. Conclusions SPH is a widely used and validated measure of health applied across the literature. This study contributes to the existing body of literature on health in South African informal settlements by providing insight into the factors associated with deteriorated SPH status amongst informal settlement dwellers in South Africa. Informal settlement dwellers aged 30 to 39 years old, those with a household income of ZAR 5501 and more and those who reported using drugs were significantly less likely to believe that their SPH status had deteriorated compared to the year preceding the survey. Those who were employed, those who reported always running out of food and residents who reported having suffered from illness or injury in the month preceding the survey were more likely to believe that their SPH status had deteriorated compared to the year preceding the survey. Given the rapidly increasing number of informal settlements across the country, especially in metropolitan areas such as Gauteng, Western Cape and KwaZulu-Natal, the evidence provided in this study is important for the development of interventions that work towards health improvement, such as health promotion and treatment programmes that aim to reduce illness and injury. It is therefore recommended that these key factors be incorporated into future planning and policy development aimed at improving the standard of living and health of these vulnerable residents.
It is also recommended that deteriorated or poor SPH be considered as another form of assessment of poor health status among informal settlement residents, especially where regular physical health examinations are not possible.